00:00:00.001 Started by upstream project "autotest-per-patch" build number 132428 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.074 Fetching changes from the remote Git repository 00:00:00.077 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.121 Using shallow fetch with depth 1 00:00:00.121 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.121 > git --version # timeout=10 00:00:00.179 > git --version # 'git version 2.39.2' 00:00:00.179 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.248 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.248 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.654 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.668 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.679 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.679 > git config core.sparsecheckout # timeout=10 00:00:03.691 > git read-tree -mu HEAD # timeout=10 00:00:03.706 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.728 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.728 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.832 [Pipeline] Start of Pipeline 00:00:03.848 [Pipeline] library 00:00:03.850 Loading library shm_lib@master 00:00:03.851 Library shm_lib@master is cached. Copying from home. 00:00:03.871 [Pipeline] node 00:00:03.881 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_3 00:00:03.883 [Pipeline] { 00:00:03.893 [Pipeline] catchError 00:00:03.894 [Pipeline] { 00:00:03.907 [Pipeline] wrap 00:00:03.915 [Pipeline] { 00:00:03.924 [Pipeline] stage 00:00:03.925 [Pipeline] { (Prologue) 00:00:03.944 [Pipeline] echo 00:00:03.946 Node: VM-host-WFP7 00:00:03.952 [Pipeline] cleanWs 00:00:03.963 [WS-CLEANUP] Deleting project workspace... 00:00:03.963 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.972 [WS-CLEANUP] done 00:00:04.174 [Pipeline] setCustomBuildProperty 00:00:04.242 [Pipeline] httpRequest 00:00:04.564 [Pipeline] echo 00:00:04.566 Sorcerer 10.211.164.20 is alive 00:00:04.575 [Pipeline] retry 00:00:04.576 [Pipeline] { 00:00:04.590 [Pipeline] httpRequest 00:00:04.594 HttpMethod: GET 00:00:04.595 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.595 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.597 Response Code: HTTP/1.1 200 OK 00:00:04.598 Success: Status code 200 is in the accepted range: 200,404 00:00:04.598 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.031 [Pipeline] } 00:00:05.044 [Pipeline] // retry 00:00:05.050 [Pipeline] sh 00:00:05.326 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.339 [Pipeline] httpRequest 00:00:05.957 [Pipeline] echo 00:00:05.958 Sorcerer 10.211.164.20 is alive 00:00:05.967 [Pipeline] retry 00:00:05.969 [Pipeline] { 00:00:05.981 [Pipeline] httpRequest 00:00:05.986 HttpMethod: GET 00:00:05.987 URL: 
http://10.211.164.20/packages/spdk_09ac735c8cf3291eeb6a7441697ca688a18dbe36.tar.gz 00:00:05.987 Sending request to url: http://10.211.164.20/packages/spdk_09ac735c8cf3291eeb6a7441697ca688a18dbe36.tar.gz 00:00:05.988 Response Code: HTTP/1.1 200 OK 00:00:05.989 Success: Status code 200 is in the accepted range: 200,404 00:00:05.989 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/spdk_09ac735c8cf3291eeb6a7441697ca688a18dbe36.tar.gz 00:00:32.917 [Pipeline] } 00:00:32.934 [Pipeline] // retry 00:00:32.941 [Pipeline] sh 00:00:33.225 + tar --no-same-owner -xf spdk_09ac735c8cf3291eeb6a7441697ca688a18dbe36.tar.gz 00:00:35.779 [Pipeline] sh 00:00:36.062 + git -C spdk log --oneline -n5 00:00:36.063 09ac735c8 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:00:36.063 c1691a126 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:00:36.063 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function 00:00:36.063 d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:00:36.063 32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true 00:00:36.083 [Pipeline] writeFile 00:00:36.099 [Pipeline] sh 00:00:36.429 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:36.442 [Pipeline] sh 00:00:36.726 + cat autorun-spdk.conf 00:00:36.726 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.726 SPDK_RUN_ASAN=1 00:00:36.726 SPDK_RUN_UBSAN=1 00:00:36.726 SPDK_TEST_RAID=1 00:00:36.726 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:36.733 RUN_NIGHTLY=0 00:00:36.735 [Pipeline] } 00:00:36.749 [Pipeline] // stage 00:00:36.765 [Pipeline] stage 00:00:36.767 [Pipeline] { (Run VM) 00:00:36.781 [Pipeline] sh 00:00:37.065 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:37.065 + echo 'Start stage prepare_nvme.sh' 00:00:37.065 Start stage prepare_nvme.sh 00:00:37.065 + [[ -n 7 ]] 00:00:37.065 + disk_prefix=ex7 00:00:37.065 + [[ -n 
/var/jenkins/workspace/raid-vg-autotest_3 ]] 00:00:37.065 + [[ -e /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf ]] 00:00:37.065 + source /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf 00:00:37.065 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.065 ++ SPDK_RUN_ASAN=1 00:00:37.065 ++ SPDK_RUN_UBSAN=1 00:00:37.065 ++ SPDK_TEST_RAID=1 00:00:37.065 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:37.065 ++ RUN_NIGHTLY=0 00:00:37.065 + cd /var/jenkins/workspace/raid-vg-autotest_3 00:00:37.065 + nvme_files=() 00:00:37.065 + declare -A nvme_files 00:00:37.065 + backend_dir=/var/lib/libvirt/images/backends 00:00:37.065 + nvme_files['nvme.img']=5G 00:00:37.065 + nvme_files['nvme-cmb.img']=5G 00:00:37.065 + nvme_files['nvme-multi0.img']=4G 00:00:37.065 + nvme_files['nvme-multi1.img']=4G 00:00:37.065 + nvme_files['nvme-multi2.img']=4G 00:00:37.065 + nvme_files['nvme-openstack.img']=8G 00:00:37.065 + nvme_files['nvme-zns.img']=5G 00:00:37.066 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:37.066 + (( SPDK_TEST_FTL == 1 )) 00:00:37.066 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:37.066 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:37.066 + for nvme in "${!nvme_files[@]}" 00:00:37.066 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:37.066 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.066 + for nvme in "${!nvme_files[@]}" 00:00:37.066 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:37.066 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.066 + for nvme in "${!nvme_files[@]}" 00:00:37.066 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:37.066 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:37.066 + for nvme in "${!nvme_files[@]}" 00:00:37.066 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:37.066 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.066 + for nvme in "${!nvme_files[@]}" 00:00:37.066 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:37.066 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.066 + for nvme in "${!nvme_files[@]}" 00:00:37.066 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:37.066 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.066 + for nvme in "${!nvme_files[@]}" 00:00:37.066 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:37.326 
Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.326 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:37.326 + echo 'End stage prepare_nvme.sh' 00:00:37.326 End stage prepare_nvme.sh 00:00:37.338 [Pipeline] sh 00:00:37.622 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:37.622 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:00:37.622 00:00:37.622 DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant 00:00:37.622 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk 00:00:37.622 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_3 00:00:37.622 HELP=0 00:00:37.622 DRY_RUN=0 00:00:37.622 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:37.622 NVME_DISKS_TYPE=nvme,nvme, 00:00:37.622 NVME_AUTO_CREATE=0 00:00:37.622 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:37.622 NVME_CMB=,, 00:00:37.622 NVME_PMR=,, 00:00:37.622 NVME_ZNS=,, 00:00:37.622 NVME_MS=,, 00:00:37.622 NVME_FDP=,, 00:00:37.622 SPDK_VAGRANT_DISTRO=fedora39 00:00:37.622 SPDK_VAGRANT_VMCPU=10 00:00:37.622 SPDK_VAGRANT_VMRAM=12288 00:00:37.622 SPDK_VAGRANT_PROVIDER=libvirt 00:00:37.622 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:37.622 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:37.622 SPDK_OPENSTACK_NETWORK=0 00:00:37.622 VAGRANT_PACKAGE_BOX=0 00:00:37.622 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 
00:00:37.622 FORCE_DISTRO=true 00:00:37.622 VAGRANT_BOX_VERSION= 00:00:37.622 EXTRA_VAGRANTFILES= 00:00:37.622 NIC_MODEL=virtio 00:00:37.622 00:00:37.622 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt' 00:00:37.622 /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_3 00:00:40.159 Bringing machine 'default' up with 'libvirt' provider... 00:00:40.419 ==> default: Creating image (snapshot of base box volume). 00:00:40.419 ==> default: Creating domain with the following settings... 00:00:40.419 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732124107_5226bf132ac8b1e0275a 00:00:40.419 ==> default: -- Domain type: kvm 00:00:40.419 ==> default: -- Cpus: 10 00:00:40.419 ==> default: -- Feature: acpi 00:00:40.419 ==> default: -- Feature: apic 00:00:40.419 ==> default: -- Feature: pae 00:00:40.419 ==> default: -- Memory: 12288M 00:00:40.419 ==> default: -- Memory Backing: hugepages: 00:00:40.419 ==> default: -- Management MAC: 00:00:40.419 ==> default: -- Loader: 00:00:40.419 ==> default: -- Nvram: 00:00:40.419 ==> default: -- Base box: spdk/fedora39 00:00:40.419 ==> default: -- Storage pool: default 00:00:40.419 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732124107_5226bf132ac8b1e0275a.img (20G) 00:00:40.419 ==> default: -- Volume Cache: default 00:00:40.419 ==> default: -- Kernel: 00:00:40.419 ==> default: -- Initrd: 00:00:40.419 ==> default: -- Graphics Type: vnc 00:00:40.419 ==> default: -- Graphics Port: -1 00:00:40.419 ==> default: -- Graphics IP: 127.0.0.1 00:00:40.419 ==> default: -- Graphics Password: Not defined 00:00:40.419 ==> default: -- Video Type: cirrus 00:00:40.419 ==> default: -- Video VRAM: 9216 00:00:40.419 ==> default: -- Sound Type: 00:00:40.419 ==> default: -- Keymap: en-us 00:00:40.419 ==> default: -- TPM Path: 00:00:40.419 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:40.419 ==> default: -- Command line 
args: 00:00:40.419 ==> default: -> value=-device, 00:00:40.419 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:40.419 ==> default: -> value=-drive, 00:00:40.419 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:00:40.419 ==> default: -> value=-device, 00:00:40.419 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.419 ==> default: -> value=-device, 00:00:40.419 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:40.419 ==> default: -> value=-drive, 00:00:40.419 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:40.419 ==> default: -> value=-device, 00:00:40.419 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.419 ==> default: -> value=-drive, 00:00:40.419 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:40.419 ==> default: -> value=-device, 00:00:40.419 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.419 ==> default: -> value=-drive, 00:00:40.419 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:40.419 ==> default: -> value=-device, 00:00:40.419 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.678 ==> default: Creating shared folders metadata... 00:00:40.678 ==> default: Starting domain. 00:00:42.657 ==> default: Waiting for domain to get an IP address... 00:00:57.543 ==> default: Waiting for SSH to become available... 00:00:58.921 ==> default: Configuring and enabling network interfaces... 
00:01:05.492 default: SSH address: 192.168.121.129:22 00:01:05.492 default: SSH username: vagrant 00:01:05.492 default: SSH auth method: private key 00:01:08.031 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:16.160 ==> default: Mounting SSHFS shared folder... 00:01:18.697 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:18.697 ==> default: Checking Mount.. 00:01:20.080 ==> default: Folder Successfully Mounted! 00:01:20.080 ==> default: Running provisioner: file... 00:01:21.019 default: ~/.gitconfig => .gitconfig 00:01:21.587 00:01:21.587 SUCCESS! 00:01:21.587 00:01:21.587 cd to /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:01:21.588 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:21.588 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 
00:01:21.588 00:01:21.597 [Pipeline] } 00:01:21.613 [Pipeline] // stage 00:01:21.623 [Pipeline] dir 00:01:21.623 Running in /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt 00:01:21.625 [Pipeline] { 00:01:21.639 [Pipeline] catchError 00:01:21.642 [Pipeline] { 00:01:21.655 [Pipeline] sh 00:01:21.955 + vagrant ssh-config --host vagrant 00:01:21.955 + sed -ne /^Host/,$p 00:01:21.955 + tee ssh_conf 00:01:24.487 Host vagrant 00:01:24.487 HostName 192.168.121.129 00:01:24.487 User vagrant 00:01:24.487 Port 22 00:01:24.487 UserKnownHostsFile /dev/null 00:01:24.487 StrictHostKeyChecking no 00:01:24.487 PasswordAuthentication no 00:01:24.487 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:24.487 IdentitiesOnly yes 00:01:24.487 LogLevel FATAL 00:01:24.487 ForwardAgent yes 00:01:24.487 ForwardX11 yes 00:01:24.487 00:01:24.501 [Pipeline] withEnv 00:01:24.503 [Pipeline] { 00:01:24.519 [Pipeline] sh 00:01:24.805 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:24.805 source /etc/os-release 00:01:24.805 [[ -e /image.version ]] && img=$(< /image.version) 00:01:24.805 # Minimal, systemd-like check. 00:01:24.805 if [[ -e /.dockerenv ]]; then 00:01:24.805 # Clear garbage from the node's name: 00:01:24.805 # agt-er_autotest_547-896 -> autotest_547-896 00:01:24.805 # $HOSTNAME is the actual container id 00:01:24.805 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:24.805 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:24.805 # We can assume this is a mount from a host where container is running, 00:01:24.805 # so fetch its hostname to easily identify the target swarm worker. 
00:01:24.805 container="$(< /etc/hostname) ($agent)" 00:01:24.805 else 00:01:24.805 # Fallback 00:01:24.805 container=$agent 00:01:24.805 fi 00:01:24.805 fi 00:01:24.805 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:24.805 00:01:25.120 [Pipeline] } 00:01:25.140 [Pipeline] // withEnv 00:01:25.150 [Pipeline] setCustomBuildProperty 00:01:25.167 [Pipeline] stage 00:01:25.170 [Pipeline] { (Tests) 00:01:25.190 [Pipeline] sh 00:01:25.474 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:25.746 [Pipeline] sh 00:01:26.032 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:26.308 [Pipeline] timeout 00:01:26.309 Timeout set to expire in 1 hr 30 min 00:01:26.310 [Pipeline] { 00:01:26.326 [Pipeline] sh 00:01:26.610 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:27.181 HEAD is now at 09ac735c8 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:01:27.194 [Pipeline] sh 00:01:27.477 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:27.752 [Pipeline] sh 00:01:28.077 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:28.353 [Pipeline] sh 00:01:28.636 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:28.896 ++ readlink -f spdk_repo 00:01:28.896 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:28.896 + [[ -n /home/vagrant/spdk_repo ]] 00:01:28.896 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:28.896 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:28.896 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:28.896 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:28.896 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:28.896 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:28.896 + cd /home/vagrant/spdk_repo 00:01:28.896 + source /etc/os-release 00:01:28.896 ++ NAME='Fedora Linux' 00:01:28.896 ++ VERSION='39 (Cloud Edition)' 00:01:28.896 ++ ID=fedora 00:01:28.896 ++ VERSION_ID=39 00:01:28.896 ++ VERSION_CODENAME= 00:01:28.896 ++ PLATFORM_ID=platform:f39 00:01:28.896 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:28.896 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:28.896 ++ LOGO=fedora-logo-icon 00:01:28.896 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:28.896 ++ HOME_URL=https://fedoraproject.org/ 00:01:28.896 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:28.896 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:28.896 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:28.896 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:28.896 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:28.896 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:28.896 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:28.896 ++ SUPPORT_END=2024-11-12 00:01:28.896 ++ VARIANT='Cloud Edition' 00:01:28.896 ++ VARIANT_ID=cloud 00:01:28.896 + uname -a 00:01:28.896 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:28.896 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:29.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:29.464 Hugepages 00:01:29.464 node hugesize free / total 00:01:29.464 node0 1048576kB 0 / 0 00:01:29.464 node0 2048kB 0 / 0 00:01:29.464 00:01:29.464 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:29.464 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:29.464 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:29.464 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:29.464 + rm -f /tmp/spdk-ld-path 00:01:29.464 + source autorun-spdk.conf 00:01:29.464 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.464 ++ SPDK_RUN_ASAN=1 00:01:29.465 ++ SPDK_RUN_UBSAN=1 00:01:29.465 ++ SPDK_TEST_RAID=1 00:01:29.465 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.465 ++ RUN_NIGHTLY=0 00:01:29.465 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:29.465 + [[ -n '' ]] 00:01:29.465 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:29.465 + for M in /var/spdk/build-*-manifest.txt 00:01:29.465 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:29.465 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.465 + for M in /var/spdk/build-*-manifest.txt 00:01:29.465 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:29.465 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.465 + for M in /var/spdk/build-*-manifest.txt 00:01:29.465 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:29.465 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.465 ++ uname 00:01:29.465 + [[ Linux == \L\i\n\u\x ]] 00:01:29.465 + sudo dmesg -T 00:01:29.465 + sudo dmesg --clear 00:01:29.725 + dmesg_pid=5427 00:01:29.725 + [[ Fedora Linux == FreeBSD ]] 00:01:29.725 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.725 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.725 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:29.725 + [[ -x /usr/src/fio-static/fio ]] 00:01:29.725 + sudo dmesg -Tw 00:01:29.725 + export FIO_BIN=/usr/src/fio-static/fio 00:01:29.725 + FIO_BIN=/usr/src/fio-static/fio 00:01:29.725 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:29.725 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:29.725 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.725 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.725 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.725 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.725 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.725 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.725 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:29.725 17:35:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:29.725 17:35:56 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:29.725 17:35:56 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.725 17:35:56 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:29.725 17:35:56 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:29.725 17:35:56 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:29.725 17:35:56 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.725 17:35:56 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:29.725 17:35:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:29.725 17:35:56 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:29.725 17:35:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:29.725 17:35:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:29.725 17:35:56 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:29.726 17:35:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:29.726 17:35:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:29.726 17:35:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:29.726 17:35:56 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.726 17:35:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.726 17:35:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.726 17:35:56 -- paths/export.sh@5 -- $ export PATH 00:01:29.726 17:35:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.726 17:35:56 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:29.726 17:35:56 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:29.726 17:35:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732124156.XXXXXX 00:01:29.726 17:35:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732124156.z5C7fZ 00:01:29.726 17:35:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:29.726 17:35:56 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:29.726 17:35:56 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:29.726 17:35:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:29.726 17:35:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:29.726 17:35:56 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:29.726 17:35:56 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:29.726 17:35:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.726 17:35:56 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:29.726 17:35:56 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:29.726 17:35:56 -- pm/common@17 -- $ local monitor 00:01:29.726 17:35:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.726 17:35:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.726 17:35:56 -- pm/common@25 -- $ sleep 1 00:01:29.726 17:35:56 -- pm/common@21 -- $ date +%s 00:01:29.726 17:35:56 -- pm/common@21 -- $ date +%s 00:01:29.726 
17:35:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732124156
00:01:29.726 17:35:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732124156
00:01:29.985 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732124156_collect-cpu-load.pm.log
00:01:29.985 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732124156_collect-vmstat.pm.log
00:01:30.924 17:35:57 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:30.924 17:35:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:30.924 17:35:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:30.924 17:35:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:30.924 17:35:57 -- spdk/autobuild.sh@16 -- $ date -u
00:01:30.924 Wed Nov 20 05:35:57 PM UTC 2024
00:01:30.924 17:35:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:30.924 v25.01-pre-227-g09ac735c8
00:01:30.924 17:35:57 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:30.924 17:35:57 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:30.924 17:35:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:30.924 17:35:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:30.924 17:35:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.924 ************************************
00:01:30.924 START TEST asan
00:01:30.924 ************************************
00:01:30.924 using asan
00:01:30.924 17:35:57 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:30.924 
00:01:30.924 real 0m0.000s
00:01:30.924 user 0m0.000s
00:01:30.924 sys 0m0.000s
00:01:30.924 17:35:57 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:30.924 17:35:57 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:30.924 ************************************
00:01:30.924 END TEST asan
00:01:30.924 ************************************
00:01:30.924 17:35:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:30.924 17:35:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:30.924 17:35:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:30.924 17:35:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:30.924 17:35:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.924 ************************************
00:01:30.924 START TEST ubsan
00:01:30.924 ************************************
00:01:30.924 using ubsan
00:01:30.924 17:35:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:30.924 
00:01:30.924 real 0m0.000s
00:01:30.924 user 0m0.000s
00:01:30.924 sys 0m0.000s
00:01:30.924 17:35:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:30.924 17:35:58 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:30.924 ************************************
00:01:30.924 END TEST ubsan
00:01:30.924 ************************************
00:01:30.924 17:35:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:30.924 17:35:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:30.924 17:35:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:30.924 17:35:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:30.924 17:35:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:30.924 17:35:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:30.924 17:35:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:30.924 17:35:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:30.924 17:35:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:31.183 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:31.183 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:31.753 Using 'verbs' RDMA provider
00:01:47.574 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:05.721 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:05.721 Creating mk/config.mk...done.
00:02:05.721 Creating mk/cc.flags.mk...done.
00:02:05.721 Type 'make' to build.
00:02:05.721 17:36:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:05.721 17:36:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:05.721 17:36:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:05.721 17:36:31 -- common/autotest_common.sh@10 -- $ set +x
00:02:05.721 ************************************
00:02:05.721 START TEST make
00:02:05.721 ************************************
00:02:05.721 17:36:31 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:05.721 make[1]: Nothing to be done for 'all'.
00:02:17.963 The Meson build system
00:02:17.963 Version: 1.5.0
00:02:17.963 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:17.963 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:17.963 Build type: native build
00:02:17.963 Program cat found: YES (/usr/bin/cat)
00:02:17.963 Project name: DPDK
00:02:17.963 Project version: 24.03.0
00:02:17.963 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:17.963 C linker for the host machine: cc ld.bfd 2.40-14
00:02:17.963 Host machine cpu family: x86_64
00:02:17.963 Host machine cpu: x86_64
00:02:17.963 Message: ## Building in Developer Mode ##
00:02:17.963 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:17.963 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:17.963 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:17.963 Program python3 found: YES (/usr/bin/python3)
00:02:17.963 Program cat found: YES (/usr/bin/cat)
00:02:17.963 Compiler for C supports arguments -march=native: YES
00:02:17.963 Checking for size of "void *" : 8
00:02:17.963 Checking for size of "void *" : 8 (cached)
00:02:17.963 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:17.963 Library m found: YES
00:02:17.963 Library numa found: YES
00:02:17.963 Has header "numaif.h" : YES
00:02:17.963 Library fdt found: NO
00:02:17.963 Library execinfo found: NO
00:02:17.963 Has header "execinfo.h" : YES
00:02:17.963 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:17.963 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:17.963 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:17.963 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:17.964 Run-time dependency openssl found: YES 3.1.1
00:02:17.964 Run-time dependency libpcap found: YES 1.10.4
00:02:17.964 Has header "pcap.h" with dependency libpcap: YES
00:02:17.964 Compiler for C supports arguments -Wcast-qual: YES
00:02:17.964 Compiler for C supports arguments -Wdeprecated: YES
00:02:17.964 Compiler for C supports arguments -Wformat: YES
00:02:17.964 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:17.964 Compiler for C supports arguments -Wformat-security: NO
00:02:17.964 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:17.964 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:17.964 Compiler for C supports arguments -Wnested-externs: YES
00:02:17.964 Compiler for C supports arguments -Wold-style-definition: YES
00:02:17.964 Compiler for C supports arguments -Wpointer-arith: YES
00:02:17.964 Compiler for C supports arguments -Wsign-compare: YES
00:02:17.964 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:17.964 Compiler for C supports arguments -Wundef: YES
00:02:17.964 Compiler for C supports arguments -Wwrite-strings: YES
00:02:17.964 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:17.964 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:17.964 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:17.964 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:17.964 Program objdump found: YES (/usr/bin/objdump)
00:02:17.964 Compiler for C supports arguments -mavx512f: YES
00:02:17.964 Checking if "AVX512 checking" compiles: YES
00:02:17.964 Fetching value of define "__SSE4_2__" : 1
00:02:17.964 Fetching value of define "__AES__" : 1
00:02:17.964 Fetching value of define "__AVX__" : 1
00:02:17.964 Fetching value of define "__AVX2__" : 1
00:02:17.964 Fetching value of define "__AVX512BW__" : 1
00:02:17.964 Fetching value of define "__AVX512CD__" : 1
00:02:17.964 Fetching value of define "__AVX512DQ__" : 1
00:02:17.964 Fetching value of define "__AVX512F__" : 1
00:02:17.964 Fetching value of define "__AVX512VL__" : 1
00:02:17.964 Fetching value of define "__PCLMUL__" : 1
00:02:17.964 Fetching value of define "__RDRND__" : 1
00:02:17.964 Fetching value of define "__RDSEED__" : 1
00:02:17.964 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:17.964 Fetching value of define "__znver1__" : (undefined)
00:02:17.964 Fetching value of define "__znver2__" : (undefined)
00:02:17.964 Fetching value of define "__znver3__" : (undefined)
00:02:17.964 Fetching value of define "__znver4__" : (undefined)
00:02:17.964 Library asan found: YES
00:02:17.964 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:17.964 Message: lib/log: Defining dependency "log"
00:02:17.964 Message: lib/kvargs: Defining dependency "kvargs"
00:02:17.964 Message: lib/telemetry: Defining dependency "telemetry"
00:02:17.964 Library rt found: YES
00:02:17.964 Checking for function "getentropy" : NO
00:02:17.964 Message: lib/eal: Defining dependency "eal"
00:02:17.964 Message: lib/ring: Defining dependency "ring"
00:02:17.964 Message: lib/rcu: Defining dependency "rcu"
00:02:17.964 Message: lib/mempool: Defining dependency "mempool"
00:02:17.964 Message: lib/mbuf: Defining dependency "mbuf"
00:02:17.964 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:17.964 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:17.964 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:17.964 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:17.964 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:17.964 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:17.964 Compiler for C supports arguments -mpclmul: YES
00:02:17.964 Compiler for C supports arguments -maes: YES
00:02:17.964 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:17.964 Compiler for C supports arguments -mavx512bw: YES
00:02:17.964 Compiler for C supports arguments -mavx512dq: YES
00:02:17.964 Compiler for C supports arguments -mavx512vl: YES
00:02:17.964 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:17.964 Compiler for C supports arguments -mavx2: YES
00:02:17.964 Compiler for C supports arguments -mavx: YES
00:02:17.964 Message: lib/net: Defining dependency "net"
00:02:17.964 Message: lib/meter: Defining dependency "meter"
00:02:17.964 Message: lib/ethdev: Defining dependency "ethdev"
00:02:17.964 Message: lib/pci: Defining dependency "pci"
00:02:17.964 Message: lib/cmdline: Defining dependency "cmdline"
00:02:17.964 Message: lib/hash: Defining dependency "hash"
00:02:17.964 Message: lib/timer: Defining dependency "timer"
00:02:17.964 Message: lib/compressdev: Defining dependency "compressdev"
00:02:17.964 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:17.964 Message: lib/dmadev: Defining dependency "dmadev"
00:02:17.964 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:17.964 Message: lib/power: Defining dependency "power"
00:02:17.964 Message: lib/reorder: Defining dependency "reorder"
00:02:17.964 Message: lib/security: Defining dependency "security"
00:02:17.964 Has header "linux/userfaultfd.h" : YES
00:02:17.964 Has header "linux/vduse.h" : YES
00:02:17.964 Message: lib/vhost: Defining dependency "vhost"
00:02:17.964 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:17.964 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:17.964 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:17.964 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:17.964 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:17.964 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:17.964 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:17.964 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:17.964 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:17.964 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:17.964 
Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:17.964 Configuring doxy-api-html.conf using configuration
00:02:17.964 Configuring doxy-api-man.conf using configuration
00:02:17.964 Program mandb found: YES (/usr/bin/mandb)
00:02:17.964 Program sphinx-build found: NO
00:02:17.964 Configuring rte_build_config.h using configuration
00:02:17.964 Message:
00:02:17.964 =================
00:02:17.964 Applications Enabled
00:02:17.964 =================
00:02:17.964 
00:02:17.964 apps:
00:02:17.964 
00:02:17.964 
00:02:17.964 Message:
00:02:17.964 =================
00:02:17.964 Libraries Enabled
00:02:17.964 =================
00:02:17.964 
00:02:17.964 libs:
00:02:17.964 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:17.964 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:17.964 cryptodev, dmadev, power, reorder, security, vhost,
00:02:17.964 
00:02:17.964 Message:
00:02:17.964 ===============
00:02:17.964 Drivers Enabled
00:02:17.964 ===============
00:02:17.964 
00:02:17.964 common:
00:02:17.964 
00:02:17.964 bus:
00:02:17.964 pci, vdev,
00:02:17.964 mempool:
00:02:17.964 ring,
00:02:17.964 dma:
00:02:17.964 
00:02:17.964 net:
00:02:17.964 
00:02:17.964 crypto:
00:02:17.964 
00:02:17.964 compress:
00:02:17.964 
00:02:17.964 vdpa:
00:02:17.964 
00:02:17.964 
00:02:17.964 Message:
00:02:17.964 =================
00:02:17.964 Content Skipped
00:02:17.964 =================
00:02:17.964 
00:02:17.964 apps:
00:02:17.964 dumpcap: explicitly disabled via build config
00:02:17.964 graph: explicitly disabled via build config
00:02:17.964 pdump: explicitly disabled via build config
00:02:17.964 proc-info: explicitly disabled via build config
00:02:17.964 test-acl: explicitly disabled via build config
00:02:17.964 test-bbdev: explicitly disabled via build config
00:02:17.964 test-cmdline: explicitly disabled via build config
00:02:17.964 test-compress-perf: explicitly disabled via build config
00:02:17.964 test-crypto-perf: explicitly disabled via build config
00:02:17.964 test-dma-perf: explicitly disabled via build config
00:02:17.964 test-eventdev: explicitly disabled via build config
00:02:17.964 test-fib: explicitly disabled via build config
00:02:17.964 test-flow-perf: explicitly disabled via build config
00:02:17.964 test-gpudev: explicitly disabled via build config
00:02:17.964 test-mldev: explicitly disabled via build config
00:02:17.964 test-pipeline: explicitly disabled via build config
00:02:17.964 test-pmd: explicitly disabled via build config
00:02:17.964 test-regex: explicitly disabled via build config
00:02:17.964 test-sad: explicitly disabled via build config
00:02:17.964 test-security-perf: explicitly disabled via build config
00:02:17.964 
00:02:17.964 libs:
00:02:17.964 argparse: explicitly disabled via build config
00:02:17.964 metrics: explicitly disabled via build config
00:02:17.964 acl: explicitly disabled via build config
00:02:17.964 bbdev: explicitly disabled via build config
00:02:17.964 bitratestats: explicitly disabled via build config
00:02:17.964 bpf: explicitly disabled via build config
00:02:17.964 cfgfile: explicitly disabled via build config
00:02:17.964 distributor: explicitly disabled via build config
00:02:17.964 efd: explicitly disabled via build config
00:02:17.964 eventdev: explicitly disabled via build config
00:02:17.964 dispatcher: explicitly disabled via build config
00:02:17.964 gpudev: explicitly disabled via build config
00:02:17.964 gro: explicitly disabled via build config
00:02:17.964 gso: explicitly disabled via build config
00:02:17.964 ip_frag: explicitly disabled via build config
00:02:17.964 jobstats: explicitly disabled via build config
00:02:17.964 latencystats: explicitly disabled via build config
00:02:17.964 lpm: explicitly disabled via build config
00:02:17.964 member: explicitly disabled via build config
00:02:17.964 pcapng: explicitly disabled via build config
00:02:17.964 rawdev: explicitly disabled via build config
00:02:17.964 regexdev: explicitly disabled via build config
00:02:17.964 mldev: explicitly disabled via build config
00:02:17.964 rib: explicitly disabled via build config
00:02:17.964 sched: explicitly disabled via build config
00:02:17.964 stack: explicitly disabled via build config
00:02:17.965 ipsec: explicitly disabled via build config
00:02:17.965 pdcp: explicitly disabled via build config
00:02:17.965 fib: explicitly disabled via build config
00:02:17.965 port: explicitly disabled via build config
00:02:17.965 pdump: explicitly disabled via build config
00:02:17.965 table: explicitly disabled via build config
00:02:17.965 pipeline: explicitly disabled via build config
00:02:17.965 graph: explicitly disabled via build config
00:02:17.965 node: explicitly disabled via build config
00:02:17.965 
00:02:17.965 drivers:
00:02:17.965 common/cpt: not in enabled drivers build config
00:02:17.965 common/dpaax: not in enabled drivers build config
00:02:17.965 common/iavf: not in enabled drivers build config
00:02:17.965 common/idpf: not in enabled drivers build config
00:02:17.965 common/ionic: not in enabled drivers build config
00:02:17.965 common/mvep: not in enabled drivers build config
00:02:17.965 common/octeontx: not in enabled drivers build config
00:02:17.965 bus/auxiliary: not in enabled drivers build config
00:02:17.965 bus/cdx: not in enabled drivers build config
00:02:17.965 bus/dpaa: not in enabled drivers build config
00:02:17.965 bus/fslmc: not in enabled drivers build config
00:02:17.965 bus/ifpga: not in enabled drivers build config
00:02:17.965 bus/platform: not in enabled drivers build config
00:02:17.965 bus/uacce: not in enabled drivers build config
00:02:17.965 bus/vmbus: not in enabled drivers build config
00:02:17.965 common/cnxk: not in enabled drivers build config
00:02:17.965 common/mlx5: not in enabled drivers build config
00:02:17.965 common/nfp: not in enabled drivers build config
00:02:17.965 common/nitrox: not in enabled drivers build config
00:02:17.965 common/qat: not in enabled drivers build config
00:02:17.965 common/sfc_efx: not in enabled drivers build config
00:02:17.965 mempool/bucket: not in enabled drivers build config
00:02:17.965 mempool/cnxk: not in enabled drivers build config
00:02:17.965 mempool/dpaa: not in enabled drivers build config
00:02:17.965 mempool/dpaa2: not in enabled drivers build config
00:02:17.965 mempool/octeontx: not in enabled drivers build config
00:02:17.965 mempool/stack: not in enabled drivers build config
00:02:17.965 dma/cnxk: not in enabled drivers build config
00:02:17.965 dma/dpaa: not in enabled drivers build config
00:02:17.965 dma/dpaa2: not in enabled drivers build config
00:02:17.965 dma/hisilicon: not in enabled drivers build config
00:02:17.965 dma/idxd: not in enabled drivers build config
00:02:17.965 dma/ioat: not in enabled drivers build config
00:02:17.965 dma/skeleton: not in enabled drivers build config
00:02:17.965 net/af_packet: not in enabled drivers build config
00:02:17.965 net/af_xdp: not in enabled drivers build config
00:02:17.965 net/ark: not in enabled drivers build config
00:02:17.965 net/atlantic: not in enabled drivers build config
00:02:17.965 net/avp: not in enabled drivers build config
00:02:17.965 net/axgbe: not in enabled drivers build config
00:02:17.965 net/bnx2x: not in enabled drivers build config
00:02:17.965 net/bnxt: not in enabled drivers build config
00:02:17.965 net/bonding: not in enabled drivers build config
00:02:17.965 net/cnxk: not in enabled drivers build config
00:02:17.965 net/cpfl: not in enabled drivers build config
00:02:17.965 net/cxgbe: not in enabled drivers build config
00:02:17.965 net/dpaa: not in enabled drivers build config
00:02:17.965 net/dpaa2: not in enabled drivers build config
00:02:17.965 net/e1000: not in enabled drivers build config
00:02:17.965 net/ena: not in enabled drivers build config
00:02:17.965 net/enetc: not in enabled drivers build config
00:02:17.965 net/enetfec: not in enabled drivers build config
00:02:17.965 net/enic: not in enabled drivers build config
00:02:17.965 net/failsafe: not in enabled drivers build config
00:02:17.965 net/fm10k: not in enabled drivers build config
00:02:17.965 net/gve: not in enabled drivers build config
00:02:17.965 net/hinic: not in enabled drivers build config
00:02:17.965 net/hns3: not in enabled drivers build config
00:02:17.965 net/i40e: not in enabled drivers build config
00:02:17.965 net/iavf: not in enabled drivers build config
00:02:17.965 net/ice: not in enabled drivers build config
00:02:17.965 net/idpf: not in enabled drivers build config
00:02:17.965 net/igc: not in enabled drivers build config
00:02:17.965 net/ionic: not in enabled drivers build config
00:02:17.965 net/ipn3ke: not in enabled drivers build config
00:02:17.965 net/ixgbe: not in enabled drivers build config
00:02:17.965 net/mana: not in enabled drivers build config
00:02:17.965 net/memif: not in enabled drivers build config
00:02:17.965 net/mlx4: not in enabled drivers build config
00:02:17.965 net/mlx5: not in enabled drivers build config
00:02:17.965 net/mvneta: not in enabled drivers build config
00:02:17.965 net/mvpp2: not in enabled drivers build config
00:02:17.965 net/netvsc: not in enabled drivers build config
00:02:17.965 net/nfb: not in enabled drivers build config
00:02:17.965 net/nfp: not in enabled drivers build config
00:02:17.965 net/ngbe: not in enabled drivers build config
00:02:17.965 net/null: not in enabled drivers build config
00:02:17.965 net/octeontx: not in enabled drivers build config
00:02:17.965 net/octeon_ep: not in enabled drivers build config
00:02:17.965 net/pcap: not in enabled drivers build config
00:02:17.965 net/pfe: not in enabled drivers build config
00:02:17.965 net/qede: not in enabled drivers build config
00:02:17.965 net/ring: not in enabled drivers build config
00:02:17.965 net/sfc: not in enabled drivers build config
00:02:17.965 net/softnic: not in enabled drivers build config
00:02:17.965 net/tap: not in enabled drivers build config
00:02:17.965 net/thunderx: not in enabled drivers build config
00:02:17.965 net/txgbe: not in enabled drivers build config
00:02:17.965 net/vdev_netvsc: not in enabled drivers build config
00:02:17.965 net/vhost: not in enabled drivers build config
00:02:17.965 net/virtio: not in enabled drivers build config
00:02:17.965 net/vmxnet3: not in enabled drivers build config
00:02:17.965 raw/*: missing internal dependency, "rawdev"
00:02:17.965 crypto/armv8: not in enabled drivers build config
00:02:17.965 crypto/bcmfs: not in enabled drivers build config
00:02:17.965 crypto/caam_jr: not in enabled drivers build config
00:02:17.965 crypto/ccp: not in enabled drivers build config
00:02:17.965 crypto/cnxk: not in enabled drivers build config
00:02:17.965 crypto/dpaa_sec: not in enabled drivers build config
00:02:17.965 crypto/dpaa2_sec: not in enabled drivers build config
00:02:17.965 crypto/ipsec_mb: not in enabled drivers build config
00:02:17.965 crypto/mlx5: not in enabled drivers build config
00:02:17.965 crypto/mvsam: not in enabled drivers build config
00:02:17.965 crypto/nitrox: not in enabled drivers build config
00:02:17.965 crypto/null: not in enabled drivers build config
00:02:17.965 crypto/octeontx: not in enabled drivers build config
00:02:17.965 crypto/openssl: not in enabled drivers build config
00:02:17.965 crypto/scheduler: not in enabled drivers build config
00:02:17.965 crypto/uadk: not in enabled drivers build config
00:02:17.965 crypto/virtio: not in enabled drivers build config
00:02:17.965 compress/isal: not in enabled drivers build config
00:02:17.965 compress/mlx5: not in enabled drivers build config
00:02:17.965 compress/nitrox: not in enabled drivers build config
00:02:17.965 compress/octeontx: not in enabled drivers build config
00:02:17.965 compress/zlib: not in enabled drivers build config
00:02:17.965 regex/*: missing internal dependency, "regexdev"
00:02:17.965 ml/*: missing internal dependency, "mldev"
00:02:17.965 vdpa/ifc: not in enabled drivers build config
00:02:17.965 vdpa/mlx5: not in enabled drivers build config
00:02:17.965 vdpa/nfp: not in enabled drivers build config
00:02:17.965 vdpa/sfc: not in enabled drivers build config
00:02:17.965 event/*: missing internal dependency, "eventdev"
00:02:17.965 baseband/*: missing internal dependency, "bbdev"
00:02:17.965 gpu/*: missing internal dependency, "gpudev"
00:02:17.965 
00:02:17.965 
00:02:17.965 Build targets in project: 85
00:02:17.965 
00:02:17.965 DPDK 24.03.0
00:02:17.965 
00:02:17.965 User defined options
00:02:17.965 buildtype : debug
00:02:17.965 default_library : shared
00:02:17.965 libdir : lib
00:02:17.965 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:17.965 b_sanitize : address
00:02:17.965 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:17.965 c_link_args :
00:02:17.965 cpu_instruction_set: native
00:02:17.965 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:17.965 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:17.965 enable_docs : false
00:02:17.965 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:17.965 enable_kmods : false
00:02:17.965 max_lcores : 128
00:02:17.965 tests : false
00:02:17.965 
00:02:17.965 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:17.965 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:17.965 [1/268] Compiling C object
lib/librte_log.a.p/log_log_linux.c.o
00:02:17.965 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:17.965 [3/268] Linking static target lib/librte_kvargs.a
00:02:17.965 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:17.965 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:17.965 [6/268] Linking static target lib/librte_log.a
00:02:17.965 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:17.965 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:17.965 [9/268] Linking static target lib/librte_telemetry.a
00:02:17.965 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:17.965 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:17.965 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:17.965 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:17.965 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:17.965 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.965 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:17.966 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:18.226 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:18.487 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.747 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:18.747 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:18.747 [22/268] Linking target lib/librte_log.so.24.1
00:02:18.747 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:18.747 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:18.747 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:18.747 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:19.007 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.007 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:19.007 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:19.007 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:19.007 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:19.007 [32/268] Linking target lib/librte_kvargs.so.24.1
00:02:19.007 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:19.007 [34/268] Linking target lib/librte_telemetry.so.24.1
00:02:19.300 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:19.300 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:19.300 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:19.560 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:19.560 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:19.560 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:19.560 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:19.560 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:19.560 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:19.560 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:19.820 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:20.078 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:20.079 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:20.079 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:20.079 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:20.079 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:20.079 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:20.337 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:20.337 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:20.338 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:20.338 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:20.338 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:20.338 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:20.597 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:20.597 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:20.857 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:20.857 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:20.857 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:20.857 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:20.857 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:20.857 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:21.117 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:21.117 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:21.376 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:21.376 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:21.636 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:21.636 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:21.636 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:21.636 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:21.636 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:21.636 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:21.636 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:21.636 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:21.945 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:21.945 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:21.945 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:21.945 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:22.202 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:22.202 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:22.202 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:22.202 [85/268] Linking static target lib/librte_ring.a
00:02:22.461 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:22.461 [87/268] Linking static target lib/librte_eal.a
00:02:22.461 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:22.461 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:22.461 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:22.461 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:22.461 [92/268] Linking static target lib/librte_rcu.a
00:02:22.719 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:22.719 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:22.719 [95/268] Linking static target lib/librte_mempool.a
00:02:22.719 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.979 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:22.979 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:22.979 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:22.979 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:23.239 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.239 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:23.239 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:23.239 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:23.498 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:23.498 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:23.498 [107/268] Linking static target lib/librte_meter.a
00:02:23.498 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:23.498 [109/268] Linking static target lib/librte_mbuf.a
00:02:23.757 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:23.758 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:23.758 [112/268] Linking static target lib/librte_net.a
00:02:23.758 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:23.758 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.758 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:24.017 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:24.017 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.277 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.277 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:24.537 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:24.537 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:24.537 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:24.796 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.796 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:24.796 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:25.055 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:25.055 [127/268] Linking static target lib/librte_pci.a
00:02:25.055 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:25.055 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:25.055 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:25.314 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:25.314 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:25.314 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:25.314 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:25.314 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:25.314 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:25.314 [137/268] Compiling C object
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.314 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:25.314 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.314 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:25.314 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:25.314 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:25.573 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:25.573 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:25.573 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:25.833 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:25.833 [147/268] Linking static target lib/librte_cmdline.a 00:02:25.833 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:26.092 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.092 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:26.092 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:26.351 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:26.351 [153/268] Linking static target lib/librte_timer.a 00:02:26.351 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:26.351 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:26.610 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:26.610 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:26.869 [158/268] Linking static target lib/librte_hash.a 00:02:26.869 [159/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:26.869 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:26.869 [161/268] Linking static target lib/librte_compressdev.a 00:02:26.869 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:26.869 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.128 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:27.128 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:27.128 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:27.128 [167/268] Linking static target lib/librte_dmadev.a 00:02:27.128 [168/268] Linking static target lib/librte_ethdev.a 00:02:27.128 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:27.387 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:27.387 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.387 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:27.646 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:27.646 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.905 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.905 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:27.905 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:27.905 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.905 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:28.165 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:28.165 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:28.165 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.424 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:28.424 [184/268] Linking static target lib/librte_cryptodev.a 00:02:28.424 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:28.424 [186/268] Linking static target lib/librte_power.a 00:02:28.683 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:28.683 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.683 [189/268] Linking static target lib/librte_reorder.a 00:02:28.683 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:28.941 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:28.941 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.941 [193/268] Linking static target lib/librte_security.a 00:02:29.200 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.200 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:29.769 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.769 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.769 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:29.769 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:30.029 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:30.029 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:30.289 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.290 [203/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:30.550 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:30.550 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:30.550 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:30.811 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:30.811 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:30.811 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:30.811 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:30.811 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.071 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:31.071 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:31.071 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:31.071 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:31.071 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.071 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:31.071 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.071 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:31.071 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:31.071 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:31.331 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.331 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:31.331 [224/268] Compiling C 
object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:31.331 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:31.331 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:31.591 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.532 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:33.915 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.915 [230/268] Linking target lib/librte_eal.so.24.1 00:02:34.175 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.175 [232/268] Linking target lib/librte_meter.so.24.1 00:02:34.175 [233/268] Linking target lib/librte_ring.so.24.1 00:02:34.175 [234/268] Linking target lib/librte_pci.so.24.1 00:02:34.175 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:34.434 [236/268] Linking target lib/librte_timer.so.24.1 00:02:34.434 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:34.434 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:34.434 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:34.434 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:34.434 [241/268] Linking target lib/librte_mempool.so.24.1 00:02:34.434 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:34.434 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:34.434 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:34.434 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:34.695 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:34.695 [247/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:34.695 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:34.695 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:34.695 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:34.955 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:34.955 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:34.955 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:34.955 [254/268] Linking target lib/librte_net.so.24.1 00:02:34.955 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:34.955 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:34.955 [257/268] Linking target lib/librte_security.so.24.1 00:02:34.955 [258/268] Linking target lib/librte_hash.so.24.1 00:02:34.955 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:35.214 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:36.158 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.158 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:36.418 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:36.418 [264/268] Linking target lib/librte_power.so.24.1 00:02:36.677 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.937 [266/268] Linking static target lib/librte_vhost.a 00:02:39.473 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.473 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:39.473 INFO: autodetecting backend as ninja 00:02:39.473 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:01.408 CC lib/log/log_flags.o 00:03:01.408 CC lib/log/log.o 00:03:01.408 CC 
lib/log/log_deprecated.o 00:03:01.408 CC lib/ut/ut.o 00:03:01.408 CC lib/ut_mock/mock.o 00:03:01.408 LIB libspdk_log.a 00:03:01.408 LIB libspdk_ut.a 00:03:01.408 LIB libspdk_ut_mock.a 00:03:01.408 SO libspdk_log.so.7.1 00:03:01.408 SO libspdk_ut.so.2.0 00:03:01.408 SO libspdk_ut_mock.so.6.0 00:03:01.408 SYMLINK libspdk_log.so 00:03:01.408 SYMLINK libspdk_ut.so 00:03:01.408 SYMLINK libspdk_ut_mock.so 00:03:01.408 CXX lib/trace_parser/trace.o 00:03:01.408 CC lib/ioat/ioat.o 00:03:01.408 CC lib/dma/dma.o 00:03:01.408 CC lib/util/base64.o 00:03:01.408 CC lib/util/cpuset.o 00:03:01.408 CC lib/util/bit_array.o 00:03:01.408 CC lib/util/crc16.o 00:03:01.408 CC lib/util/crc32c.o 00:03:01.408 CC lib/util/crc32.o 00:03:01.408 CC lib/vfio_user/host/vfio_user_pci.o 00:03:01.408 CC lib/util/crc32_ieee.o 00:03:01.408 CC lib/util/crc64.o 00:03:01.408 CC lib/util/dif.o 00:03:01.408 CC lib/vfio_user/host/vfio_user.o 00:03:01.408 CC lib/util/fd.o 00:03:01.408 CC lib/util/fd_group.o 00:03:01.408 LIB libspdk_dma.a 00:03:01.408 CC lib/util/file.o 00:03:01.408 SO libspdk_dma.so.5.0 00:03:01.408 CC lib/util/hexlify.o 00:03:01.408 LIB libspdk_ioat.a 00:03:01.408 SYMLINK libspdk_dma.so 00:03:01.408 CC lib/util/iov.o 00:03:01.408 SO libspdk_ioat.so.7.0 00:03:01.408 CC lib/util/math.o 00:03:01.408 CC lib/util/net.o 00:03:01.408 SYMLINK libspdk_ioat.so 00:03:01.408 CC lib/util/pipe.o 00:03:01.408 LIB libspdk_vfio_user.a 00:03:01.408 CC lib/util/strerror_tls.o 00:03:01.408 SO libspdk_vfio_user.so.5.0 00:03:01.408 CC lib/util/string.o 00:03:01.408 SYMLINK libspdk_vfio_user.so 00:03:01.408 CC lib/util/uuid.o 00:03:01.408 CC lib/util/xor.o 00:03:01.408 CC lib/util/zipf.o 00:03:01.408 CC lib/util/md5.o 00:03:01.714 LIB libspdk_util.a 00:03:01.714 SO libspdk_util.so.10.1 00:03:01.714 LIB libspdk_trace_parser.a 00:03:01.714 SO libspdk_trace_parser.so.6.0 00:03:01.973 SYMLINK libspdk_util.so 00:03:01.973 SYMLINK libspdk_trace_parser.so 00:03:01.973 CC lib/conf/conf.o 00:03:01.973 CC 
lib/json/json_parse.o 00:03:01.973 CC lib/vmd/vmd.o 00:03:01.973 CC lib/json/json_util.o 00:03:01.973 CC lib/json/json_write.o 00:03:01.973 CC lib/vmd/led.o 00:03:01.973 CC lib/rdma_utils/rdma_utils.o 00:03:01.973 CC lib/env_dpdk/env.o 00:03:01.973 CC lib/env_dpdk/memory.o 00:03:01.973 CC lib/idxd/idxd.o 00:03:02.237 CC lib/idxd/idxd_user.o 00:03:02.237 CC lib/idxd/idxd_kernel.o 00:03:02.237 CC lib/env_dpdk/pci.o 00:03:02.585 LIB libspdk_conf.a 00:03:02.585 LIB libspdk_rdma_utils.a 00:03:02.585 SO libspdk_conf.so.6.0 00:03:02.585 LIB libspdk_json.a 00:03:02.585 SO libspdk_rdma_utils.so.1.0 00:03:02.585 SYMLINK libspdk_conf.so 00:03:02.585 SO libspdk_json.so.6.0 00:03:02.585 CC lib/env_dpdk/init.o 00:03:02.585 SYMLINK libspdk_rdma_utils.so 00:03:02.585 CC lib/env_dpdk/threads.o 00:03:02.585 CC lib/env_dpdk/pci_ioat.o 00:03:02.585 CC lib/env_dpdk/pci_virtio.o 00:03:02.585 SYMLINK libspdk_json.so 00:03:02.585 CC lib/env_dpdk/pci_vmd.o 00:03:02.585 CC lib/env_dpdk/pci_idxd.o 00:03:02.585 CC lib/env_dpdk/pci_event.o 00:03:02.585 CC lib/env_dpdk/sigbus_handler.o 00:03:02.585 CC lib/env_dpdk/pci_dpdk.o 00:03:02.843 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.843 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.843 LIB libspdk_idxd.a 00:03:02.843 SO libspdk_idxd.so.12.1 00:03:03.102 LIB libspdk_vmd.a 00:03:03.102 SYMLINK libspdk_idxd.so 00:03:03.102 SO libspdk_vmd.so.6.0 00:03:03.102 CC lib/rdma_provider/common.o 00:03:03.102 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:03.102 SYMLINK libspdk_vmd.so 00:03:03.102 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.102 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.102 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.102 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.359 LIB libspdk_rdma_provider.a 00:03:03.359 SO libspdk_rdma_provider.so.7.0 00:03:03.618 SYMLINK libspdk_rdma_provider.so 00:03:03.618 LIB libspdk_jsonrpc.a 00:03:03.618 SO libspdk_jsonrpc.so.6.0 00:03:03.618 SYMLINK libspdk_jsonrpc.so 00:03:04.228 LIB libspdk_env_dpdk.a 00:03:04.228 CC 
lib/rpc/rpc.o 00:03:04.228 SO libspdk_env_dpdk.so.15.1 00:03:04.228 SYMLINK libspdk_env_dpdk.so 00:03:04.486 LIB libspdk_rpc.a 00:03:04.486 SO libspdk_rpc.so.6.0 00:03:04.486 SYMLINK libspdk_rpc.so 00:03:05.053 CC lib/notify/notify.o 00:03:05.053 CC lib/notify/notify_rpc.o 00:03:05.053 CC lib/keyring/keyring_rpc.o 00:03:05.053 CC lib/keyring/keyring.o 00:03:05.053 CC lib/trace/trace_flags.o 00:03:05.053 CC lib/trace/trace.o 00:03:05.053 CC lib/trace/trace_rpc.o 00:03:05.053 LIB libspdk_notify.a 00:03:05.053 SO libspdk_notify.so.6.0 00:03:05.313 LIB libspdk_keyring.a 00:03:05.313 LIB libspdk_trace.a 00:03:05.313 SYMLINK libspdk_notify.so 00:03:05.313 SO libspdk_keyring.so.2.0 00:03:05.313 SO libspdk_trace.so.11.0 00:03:05.313 SYMLINK libspdk_keyring.so 00:03:05.313 SYMLINK libspdk_trace.so 00:03:05.882 CC lib/thread/thread.o 00:03:05.882 CC lib/thread/iobuf.o 00:03:05.882 CC lib/sock/sock_rpc.o 00:03:05.882 CC lib/sock/sock.o 00:03:06.451 LIB libspdk_sock.a 00:03:06.451 SO libspdk_sock.so.10.0 00:03:06.451 SYMLINK libspdk_sock.so 00:03:07.019 CC lib/nvme/nvme_ctrlr.o 00:03:07.020 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:07.020 CC lib/nvme/nvme_fabric.o 00:03:07.020 CC lib/nvme/nvme_ns_cmd.o 00:03:07.020 CC lib/nvme/nvme_ns.o 00:03:07.020 CC lib/nvme/nvme_pcie.o 00:03:07.020 CC lib/nvme/nvme_pcie_common.o 00:03:07.020 CC lib/nvme/nvme.o 00:03:07.020 CC lib/nvme/nvme_qpair.o 00:03:07.586 CC lib/nvme/nvme_quirks.o 00:03:07.846 CC lib/nvme/nvme_transport.o 00:03:07.846 CC lib/nvme/nvme_discovery.o 00:03:07.846 LIB libspdk_thread.a 00:03:07.846 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:07.846 SO libspdk_thread.so.11.0 00:03:07.846 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:07.846 SYMLINK libspdk_thread.so 00:03:07.846 CC lib/nvme/nvme_tcp.o 00:03:08.105 CC lib/nvme/nvme_opal.o 00:03:08.105 CC lib/nvme/nvme_io_msg.o 00:03:08.105 CC lib/nvme/nvme_poll_group.o 00:03:08.364 CC lib/nvme/nvme_zns.o 00:03:08.364 CC lib/nvme/nvme_stubs.o 00:03:08.364 CC lib/nvme/nvme_auth.o 00:03:08.623 CC 
lib/accel/accel.o 00:03:08.881 CC lib/blob/blobstore.o 00:03:08.881 CC lib/init/json_config.o 00:03:08.881 CC lib/blob/request.o 00:03:08.881 CC lib/virtio/virtio.o 00:03:08.881 CC lib/blob/zeroes.o 00:03:09.140 CC lib/blob/blob_bs_dev.o 00:03:09.140 CC lib/init/subsystem.o 00:03:09.400 CC lib/nvme/nvme_cuse.o 00:03:09.400 CC lib/virtio/virtio_vhost_user.o 00:03:09.400 CC lib/init/subsystem_rpc.o 00:03:09.400 CC lib/fsdev/fsdev.o 00:03:09.400 CC lib/fsdev/fsdev_io.o 00:03:09.659 CC lib/init/rpc.o 00:03:09.659 CC lib/fsdev/fsdev_rpc.o 00:03:09.659 LIB libspdk_init.a 00:03:09.659 CC lib/nvme/nvme_rdma.o 00:03:09.659 SO libspdk_init.so.6.0 00:03:09.918 CC lib/virtio/virtio_vfio_user.o 00:03:09.918 CC lib/virtio/virtio_pci.o 00:03:09.918 SYMLINK libspdk_init.so 00:03:09.918 CC lib/accel/accel_rpc.o 00:03:09.918 CC lib/accel/accel_sw.o 00:03:10.176 CC lib/event/app.o 00:03:10.176 CC lib/event/reactor.o 00:03:10.176 CC lib/event/log_rpc.o 00:03:10.176 LIB libspdk_virtio.a 00:03:10.176 CC lib/event/app_rpc.o 00:03:10.176 SO libspdk_virtio.so.7.0 00:03:10.176 CC lib/event/scheduler_static.o 00:03:10.434 SYMLINK libspdk_virtio.so 00:03:10.434 LIB libspdk_accel.a 00:03:10.434 SO libspdk_accel.so.16.0 00:03:10.434 LIB libspdk_fsdev.a 00:03:10.692 SYMLINK libspdk_accel.so 00:03:10.692 SO libspdk_fsdev.so.2.0 00:03:10.692 LIB libspdk_event.a 00:03:10.692 SYMLINK libspdk_fsdev.so 00:03:10.692 SO libspdk_event.so.14.0 00:03:10.950 SYMLINK libspdk_event.so 00:03:10.950 CC lib/bdev/bdev.o 00:03:10.950 CC lib/bdev/part.o 00:03:10.950 CC lib/bdev/bdev_zone.o 00:03:10.950 CC lib/bdev/bdev_rpc.o 00:03:10.950 CC lib/bdev/scsi_nvme.o 00:03:10.950 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:11.517 LIB libspdk_nvme.a 00:03:11.784 SO libspdk_nvme.so.15.0 00:03:12.071 SYMLINK libspdk_nvme.so 00:03:12.071 LIB libspdk_fuse_dispatcher.a 00:03:12.071 SO libspdk_fuse_dispatcher.so.1.0 00:03:12.071 SYMLINK libspdk_fuse_dispatcher.so 00:03:13.446 LIB libspdk_blob.a 00:03:13.446 SO 
libspdk_blob.so.11.0 00:03:13.446 SYMLINK libspdk_blob.so 00:03:13.705 CC lib/blobfs/blobfs.o 00:03:13.705 CC lib/blobfs/tree.o 00:03:13.963 CC lib/lvol/lvol.o 00:03:14.587 LIB libspdk_bdev.a 00:03:14.860 SO libspdk_bdev.so.17.0 00:03:14.860 SYMLINK libspdk_bdev.so 00:03:14.860 LIB libspdk_blobfs.a 00:03:15.120 SO libspdk_blobfs.so.10.0 00:03:15.120 SYMLINK libspdk_blobfs.so 00:03:15.120 CC lib/ftl/ftl_init.o 00:03:15.120 CC lib/ftl/ftl_core.o 00:03:15.120 CC lib/ftl/ftl_layout.o 00:03:15.120 CC lib/ftl/ftl_debug.o 00:03:15.120 CC lib/ftl/ftl_io.o 00:03:15.120 CC lib/scsi/dev.o 00:03:15.120 CC lib/nvmf/ctrlr.o 00:03:15.120 CC lib/nbd/nbd.o 00:03:15.120 CC lib/ublk/ublk.o 00:03:15.120 LIB libspdk_lvol.a 00:03:15.379 SO libspdk_lvol.so.10.0 00:03:15.379 SYMLINK libspdk_lvol.so 00:03:15.379 CC lib/ublk/ublk_rpc.o 00:03:15.379 CC lib/ftl/ftl_sb.o 00:03:15.379 CC lib/ftl/ftl_l2p.o 00:03:15.379 CC lib/scsi/lun.o 00:03:15.379 CC lib/ftl/ftl_l2p_flat.o 00:03:15.379 CC lib/ftl/ftl_nv_cache.o 00:03:15.638 CC lib/nbd/nbd_rpc.o 00:03:15.638 CC lib/scsi/port.o 00:03:15.638 CC lib/ftl/ftl_band.o 00:03:15.638 CC lib/ftl/ftl_band_ops.o 00:03:15.638 CC lib/nvmf/ctrlr_discovery.o 00:03:15.638 CC lib/nvmf/ctrlr_bdev.o 00:03:15.638 LIB libspdk_nbd.a 00:03:15.638 SO libspdk_nbd.so.7.0 00:03:15.897 CC lib/nvmf/subsystem.o 00:03:15.897 SYMLINK libspdk_nbd.so 00:03:15.897 CC lib/scsi/scsi.o 00:03:15.897 CC lib/scsi/scsi_bdev.o 00:03:15.897 CC lib/scsi/scsi_pr.o 00:03:15.897 LIB libspdk_ublk.a 00:03:16.156 SO libspdk_ublk.so.3.0 00:03:16.156 SYMLINK libspdk_ublk.so 00:03:16.156 CC lib/scsi/scsi_rpc.o 00:03:16.156 CC lib/scsi/task.o 00:03:16.156 CC lib/ftl/ftl_writer.o 00:03:16.156 CC lib/nvmf/nvmf.o 00:03:16.415 CC lib/ftl/ftl_rq.o 00:03:16.416 CC lib/nvmf/nvmf_rpc.o 00:03:16.416 LIB libspdk_scsi.a 00:03:16.416 CC lib/nvmf/transport.o 00:03:16.416 SO libspdk_scsi.so.9.0 00:03:16.675 CC lib/nvmf/tcp.o 00:03:16.675 CC lib/ftl/ftl_reloc.o 00:03:16.675 SYMLINK libspdk_scsi.so 00:03:16.675 CC 
lib/nvmf/stubs.o 00:03:16.675 CC lib/nvmf/mdns_server.o 00:03:16.933 CC lib/ftl/ftl_l2p_cache.o 00:03:17.192 CC lib/ftl/ftl_p2l.o 00:03:17.192 CC lib/nvmf/rdma.o 00:03:17.451 CC lib/nvmf/auth.o 00:03:17.451 CC lib/ftl/ftl_p2l_log.o 00:03:17.451 CC lib/ftl/mngt/ftl_mngt.o 00:03:17.709 CC lib/iscsi/conn.o 00:03:17.709 CC lib/vhost/vhost.o 00:03:17.709 CC lib/vhost/vhost_rpc.o 00:03:17.709 CC lib/iscsi/init_grp.o 00:03:17.709 CC lib/vhost/vhost_scsi.o 00:03:17.967 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:17.967 CC lib/vhost/vhost_blk.o 00:03:17.967 CC lib/iscsi/iscsi.o 00:03:18.226 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.226 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.483 CC lib/vhost/rte_vhost_user.o 00:03:18.483 CC lib/iscsi/param.o 00:03:18.483 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.483 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.741 CC lib/iscsi/portal_grp.o 00:03:18.741 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.741 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.741 CC lib/iscsi/tgt_node.o 00:03:18.998 CC lib/iscsi/iscsi_subsystem.o 00:03:18.999 CC lib/iscsi/iscsi_rpc.o 00:03:18.999 CC lib/iscsi/task.o 00:03:18.999 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.999 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:19.255 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:19.255 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:19.255 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:19.512 CC lib/ftl/utils/ftl_conf.o 00:03:19.512 CC lib/ftl/utils/ftl_md.o 00:03:19.512 CC lib/ftl/utils/ftl_mempool.o 00:03:19.512 CC lib/ftl/utils/ftl_bitmap.o 00:03:19.512 CC lib/ftl/utils/ftl_property.o 00:03:19.512 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:19.770 LIB libspdk_vhost.a 00:03:19.770 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:19.770 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:19.770 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:19.770 SO libspdk_vhost.so.8.0 00:03:19.770 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:19.770 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:19.770 CC lib/ftl/upgrade/ftl_trim_upgrade.o 
00:03:20.029 SYMLINK libspdk_vhost.so 00:03:20.029 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:20.029 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:20.029 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:20.029 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:20.029 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:20.029 LIB libspdk_nvmf.a 00:03:20.029 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:20.029 CC lib/ftl/base/ftl_base_dev.o 00:03:20.029 CC lib/ftl/base/ftl_base_bdev.o 00:03:20.288 LIB libspdk_iscsi.a 00:03:20.288 CC lib/ftl/ftl_trace.o 00:03:20.288 SO libspdk_nvmf.so.20.0 00:03:20.288 SO libspdk_iscsi.so.8.0 00:03:20.547 LIB libspdk_ftl.a 00:03:20.547 SYMLINK libspdk_iscsi.so 00:03:20.547 SYMLINK libspdk_nvmf.so 00:03:20.806 SO libspdk_ftl.so.9.0 00:03:21.066 SYMLINK libspdk_ftl.so 00:03:21.325 CC module/env_dpdk/env_dpdk_rpc.o 00:03:21.585 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:21.585 CC module/keyring/linux/keyring.o 00:03:21.585 CC module/fsdev/aio/fsdev_aio.o 00:03:21.585 CC module/sock/posix/posix.o 00:03:21.585 CC module/keyring/file/keyring.o 00:03:21.585 CC module/blob/bdev/blob_bdev.o 00:03:21.585 CC module/scheduler/gscheduler/gscheduler.o 00:03:21.585 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:21.585 CC module/accel/error/accel_error.o 00:03:21.585 LIB libspdk_env_dpdk_rpc.a 00:03:21.585 SO libspdk_env_dpdk_rpc.so.6.0 00:03:21.585 SYMLINK libspdk_env_dpdk_rpc.so 00:03:21.585 CC module/keyring/file/keyring_rpc.o 00:03:21.845 CC module/keyring/linux/keyring_rpc.o 00:03:21.845 LIB libspdk_scheduler_dpdk_governor.a 00:03:21.845 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:21.845 LIB libspdk_scheduler_gscheduler.a 00:03:21.845 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:21.845 CC module/accel/error/accel_error_rpc.o 00:03:21.845 SO libspdk_scheduler_gscheduler.so.4.0 00:03:21.845 LIB libspdk_keyring_file.a 00:03:21.845 LIB libspdk_keyring_linux.a 00:03:21.845 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:21.845 LIB libspdk_blob_bdev.a 00:03:21.845 LIB 
libspdk_scheduler_dynamic.a 00:03:21.845 SO libspdk_keyring_file.so.2.0 00:03:21.845 SO libspdk_keyring_linux.so.1.0 00:03:21.845 SO libspdk_blob_bdev.so.11.0 00:03:21.845 SYMLINK libspdk_scheduler_gscheduler.so 00:03:21.845 SO libspdk_scheduler_dynamic.so.4.0 00:03:22.109 CC module/fsdev/aio/linux_aio_mgr.o 00:03:22.109 SYMLINK libspdk_keyring_file.so 00:03:22.109 SYMLINK libspdk_keyring_linux.so 00:03:22.109 SYMLINK libspdk_blob_bdev.so 00:03:22.109 LIB libspdk_accel_error.a 00:03:22.109 SO libspdk_accel_error.so.2.0 00:03:22.109 SYMLINK libspdk_scheduler_dynamic.so 00:03:22.109 CC module/accel/ioat/accel_ioat.o 00:03:22.109 SYMLINK libspdk_accel_error.so 00:03:22.109 CC module/accel/ioat/accel_ioat_rpc.o 00:03:22.110 CC module/accel/dsa/accel_dsa.o 00:03:22.110 CC module/accel/iaa/accel_iaa.o 00:03:22.375 CC module/accel/iaa/accel_iaa_rpc.o 00:03:22.375 CC module/accel/dsa/accel_dsa_rpc.o 00:03:22.375 CC module/bdev/delay/vbdev_delay.o 00:03:22.375 LIB libspdk_accel_ioat.a 00:03:22.375 CC module/bdev/error/vbdev_error.o 00:03:22.375 CC module/bdev/error/vbdev_error_rpc.o 00:03:22.375 CC module/blobfs/bdev/blobfs_bdev.o 00:03:22.375 SO libspdk_accel_ioat.so.6.0 00:03:22.375 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:22.375 LIB libspdk_accel_iaa.a 00:03:22.375 SYMLINK libspdk_accel_ioat.so 00:03:22.375 LIB libspdk_fsdev_aio.a 00:03:22.634 SO libspdk_accel_iaa.so.3.0 00:03:22.634 LIB libspdk_accel_dsa.a 00:03:22.634 LIB libspdk_sock_posix.a 00:03:22.634 SO libspdk_fsdev_aio.so.1.0 00:03:22.634 SO libspdk_accel_dsa.so.5.0 00:03:22.634 SO libspdk_sock_posix.so.6.0 00:03:22.634 SYMLINK libspdk_accel_iaa.so 00:03:22.634 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:22.634 SYMLINK libspdk_fsdev_aio.so 00:03:22.634 SYMLINK libspdk_accel_dsa.so 00:03:22.634 CC module/bdev/gpt/gpt.o 00:03:22.634 LIB libspdk_bdev_error.a 00:03:22.634 SYMLINK libspdk_sock_posix.so 00:03:22.634 SO libspdk_bdev_error.so.6.0 00:03:22.893 LIB libspdk_blobfs_bdev.a 00:03:22.893 CC 
module/bdev/null/bdev_null.o 00:03:22.893 CC module/bdev/lvol/vbdev_lvol.o 00:03:22.893 CC module/bdev/malloc/bdev_malloc.o 00:03:22.893 SYMLINK libspdk_bdev_error.so 00:03:22.893 CC module/bdev/gpt/vbdev_gpt.o 00:03:22.893 CC module/bdev/nvme/bdev_nvme.o 00:03:22.893 CC module/bdev/passthru/vbdev_passthru.o 00:03:22.893 SO libspdk_blobfs_bdev.so.6.0 00:03:22.893 LIB libspdk_bdev_delay.a 00:03:22.893 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:22.893 SO libspdk_bdev_delay.so.6.0 00:03:22.893 CC module/bdev/raid/bdev_raid.o 00:03:22.893 SYMLINK libspdk_blobfs_bdev.so 00:03:22.893 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:22.893 SYMLINK libspdk_bdev_delay.so 00:03:22.893 CC module/bdev/raid/bdev_raid_rpc.o 00:03:23.175 LIB libspdk_bdev_gpt.a 00:03:23.175 CC module/bdev/raid/bdev_raid_sb.o 00:03:23.175 CC module/bdev/null/bdev_null_rpc.o 00:03:23.175 SO libspdk_bdev_gpt.so.6.0 00:03:23.175 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.433 SYMLINK libspdk_bdev_gpt.so 00:03:23.433 LIB libspdk_bdev_passthru.a 00:03:23.433 LIB libspdk_bdev_malloc.a 00:03:23.433 SO libspdk_bdev_passthru.so.6.0 00:03:23.433 LIB libspdk_bdev_null.a 00:03:23.433 SO libspdk_bdev_malloc.so.6.0 00:03:23.433 SO libspdk_bdev_null.so.6.0 00:03:23.433 SYMLINK libspdk_bdev_passthru.so 00:03:23.433 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:23.433 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:23.433 SYMLINK libspdk_bdev_malloc.so 00:03:23.433 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:23.433 CC module/bdev/raid/raid0.o 00:03:23.433 SYMLINK libspdk_bdev_null.so 00:03:23.433 CC module/bdev/raid/raid1.o 00:03:23.692 CC module/bdev/split/vbdev_split.o 00:03:23.692 CC module/bdev/split/vbdev_split_rpc.o 00:03:23.692 CC module/bdev/aio/bdev_aio.o 00:03:23.692 LIB libspdk_bdev_lvol.a 00:03:23.950 SO libspdk_bdev_lvol.so.6.0 00:03:23.950 CC module/bdev/nvme/nvme_rpc.o 00:03:23.950 CC module/bdev/nvme/bdev_mdns_client.o 00:03:23.950 CC module/bdev/nvme/vbdev_opal.o 00:03:23.950 
SYMLINK libspdk_bdev_lvol.so 00:03:23.950 LIB libspdk_bdev_split.a 00:03:23.950 SO libspdk_bdev_split.so.6.0 00:03:23.950 LIB libspdk_bdev_zone_block.a 00:03:23.950 SYMLINK libspdk_bdev_split.so 00:03:23.950 SO libspdk_bdev_zone_block.so.6.0 00:03:24.208 CC module/bdev/ftl/bdev_ftl.o 00:03:24.208 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:24.208 SYMLINK libspdk_bdev_zone_block.so 00:03:24.208 CC module/bdev/aio/bdev_aio_rpc.o 00:03:24.208 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:24.208 CC module/bdev/iscsi/bdev_iscsi.o 00:03:24.208 CC module/bdev/raid/concat.o 00:03:24.208 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.208 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:24.208 CC module/bdev/raid/raid5f.o 00:03:24.467 LIB libspdk_bdev_aio.a 00:03:24.467 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.467 SO libspdk_bdev_aio.so.6.0 00:03:24.467 LIB libspdk_bdev_ftl.a 00:03:24.467 SYMLINK libspdk_bdev_aio.so 00:03:24.467 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.467 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.467 SO libspdk_bdev_ftl.so.6.0 00:03:24.467 SYMLINK libspdk_bdev_ftl.so 00:03:24.725 LIB libspdk_bdev_iscsi.a 00:03:24.725 SO libspdk_bdev_iscsi.so.6.0 00:03:24.984 SYMLINK libspdk_bdev_iscsi.so 00:03:24.984 LIB libspdk_bdev_raid.a 00:03:24.984 LIB libspdk_bdev_virtio.a 00:03:24.984 SO libspdk_bdev_raid.so.6.0 00:03:24.984 SO libspdk_bdev_virtio.so.6.0 00:03:25.245 SYMLINK libspdk_bdev_raid.so 00:03:25.245 SYMLINK libspdk_bdev_virtio.so 00:03:26.625 LIB libspdk_bdev_nvme.a 00:03:26.625 SO libspdk_bdev_nvme.so.7.1 00:03:26.625 SYMLINK libspdk_bdev_nvme.so 00:03:27.192 CC module/event/subsystems/sock/sock.o 00:03:27.192 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:27.192 CC module/event/subsystems/keyring/keyring.o 00:03:27.192 CC module/event/subsystems/scheduler/scheduler.o 00:03:27.192 CC module/event/subsystems/fsdev/fsdev.o 00:03:27.192 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:27.192 CC module/event/subsystems/vmd/vmd.o 
00:03:27.192 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:27.192 CC module/event/subsystems/iobuf/iobuf.o 00:03:27.451 LIB libspdk_event_keyring.a 00:03:27.451 LIB libspdk_event_fsdev.a 00:03:27.451 LIB libspdk_event_vhost_blk.a 00:03:27.451 LIB libspdk_event_scheduler.a 00:03:27.451 LIB libspdk_event_vmd.a 00:03:27.451 SO libspdk_event_keyring.so.1.0 00:03:27.451 SO libspdk_event_fsdev.so.1.0 00:03:27.451 SO libspdk_event_vhost_blk.so.3.0 00:03:27.451 SO libspdk_event_scheduler.so.4.0 00:03:27.451 LIB libspdk_event_iobuf.a 00:03:27.451 LIB libspdk_event_sock.a 00:03:27.451 SO libspdk_event_vmd.so.6.0 00:03:27.451 SYMLINK libspdk_event_keyring.so 00:03:27.451 SO libspdk_event_sock.so.5.0 00:03:27.451 SYMLINK libspdk_event_vhost_blk.so 00:03:27.451 SYMLINK libspdk_event_fsdev.so 00:03:27.451 SO libspdk_event_iobuf.so.3.0 00:03:27.451 SYMLINK libspdk_event_scheduler.so 00:03:27.451 SYMLINK libspdk_event_sock.so 00:03:27.451 SYMLINK libspdk_event_vmd.so 00:03:27.710 SYMLINK libspdk_event_iobuf.so 00:03:27.969 CC module/event/subsystems/accel/accel.o 00:03:28.229 LIB libspdk_event_accel.a 00:03:28.229 SO libspdk_event_accel.so.6.0 00:03:28.229 SYMLINK libspdk_event_accel.so 00:03:28.796 CC module/event/subsystems/bdev/bdev.o 00:03:29.055 LIB libspdk_event_bdev.a 00:03:29.055 SO libspdk_event_bdev.so.6.0 00:03:29.055 SYMLINK libspdk_event_bdev.so 00:03:29.314 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.315 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.315 CC module/event/subsystems/scsi/scsi.o 00:03:29.315 CC module/event/subsystems/nbd/nbd.o 00:03:29.315 CC module/event/subsystems/ublk/ublk.o 00:03:29.574 LIB libspdk_event_nbd.a 00:03:29.574 LIB libspdk_event_scsi.a 00:03:29.574 LIB libspdk_event_ublk.a 00:03:29.574 SO libspdk_event_scsi.so.6.0 00:03:29.574 SO libspdk_event_nbd.so.6.0 00:03:29.574 SO libspdk_event_ublk.so.3.0 00:03:29.574 LIB libspdk_event_nvmf.a 00:03:29.574 SYMLINK libspdk_event_nbd.so 00:03:29.574 SYMLINK libspdk_event_scsi.so 
00:03:29.574 SYMLINK libspdk_event_ublk.so 00:03:29.834 SO libspdk_event_nvmf.so.6.0 00:03:29.834 SYMLINK libspdk_event_nvmf.so 00:03:30.094 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.094 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.353 LIB libspdk_event_vhost_scsi.a 00:03:30.353 SO libspdk_event_vhost_scsi.so.3.0 00:03:30.353 LIB libspdk_event_iscsi.a 00:03:30.353 SYMLINK libspdk_event_vhost_scsi.so 00:03:30.353 SO libspdk_event_iscsi.so.6.0 00:03:30.353 SYMLINK libspdk_event_iscsi.so 00:03:30.614 SO libspdk.so.6.0 00:03:30.614 SYMLINK libspdk.so 00:03:31.229 CXX app/trace/trace.o 00:03:31.229 CC app/trace_record/trace_record.o 00:03:31.229 CC app/spdk_lspci/spdk_lspci.o 00:03:31.229 CC app/nvmf_tgt/nvmf_main.o 00:03:31.229 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:31.229 CC app/iscsi_tgt/iscsi_tgt.o 00:03:31.229 CC test/thread/poller_perf/poller_perf.o 00:03:31.229 CC examples/ioat/perf/perf.o 00:03:31.229 CC app/spdk_tgt/spdk_tgt.o 00:03:31.229 CC examples/util/zipf/zipf.o 00:03:31.229 LINK spdk_lspci 00:03:31.229 LINK nvmf_tgt 00:03:31.229 LINK iscsi_tgt 00:03:31.229 LINK interrupt_tgt 00:03:31.229 LINK spdk_trace_record 00:03:31.493 LINK poller_perf 00:03:31.493 LINK zipf 00:03:31.493 LINK spdk_tgt 00:03:31.493 LINK ioat_perf 00:03:31.493 CC app/spdk_nvme_perf/perf.o 00:03:31.493 LINK spdk_trace 00:03:31.753 TEST_HEADER include/spdk/accel.h 00:03:31.753 TEST_HEADER include/spdk/accel_module.h 00:03:31.753 TEST_HEADER include/spdk/assert.h 00:03:31.753 TEST_HEADER include/spdk/barrier.h 00:03:31.753 CC examples/ioat/verify/verify.o 00:03:31.753 TEST_HEADER include/spdk/base64.h 00:03:31.753 TEST_HEADER include/spdk/bdev.h 00:03:31.753 TEST_HEADER include/spdk/bdev_module.h 00:03:31.753 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.753 TEST_HEADER include/spdk/bit_array.h 00:03:31.753 TEST_HEADER include/spdk/bit_pool.h 00:03:31.753 CC app/spdk_nvme_identify/identify.o 00:03:31.753 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.753 
TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.753 TEST_HEADER include/spdk/blobfs.h 00:03:31.753 TEST_HEADER include/spdk/blob.h 00:03:31.753 TEST_HEADER include/spdk/conf.h 00:03:31.753 TEST_HEADER include/spdk/config.h 00:03:31.753 TEST_HEADER include/spdk/cpuset.h 00:03:31.753 TEST_HEADER include/spdk/crc16.h 00:03:31.753 TEST_HEADER include/spdk/crc32.h 00:03:31.753 TEST_HEADER include/spdk/crc64.h 00:03:31.753 TEST_HEADER include/spdk/dif.h 00:03:31.753 TEST_HEADER include/spdk/dma.h 00:03:31.753 TEST_HEADER include/spdk/endian.h 00:03:31.753 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.753 TEST_HEADER include/spdk/env.h 00:03:31.753 TEST_HEADER include/spdk/event.h 00:03:31.753 TEST_HEADER include/spdk/fd_group.h 00:03:31.753 TEST_HEADER include/spdk/fd.h 00:03:31.753 TEST_HEADER include/spdk/file.h 00:03:31.753 TEST_HEADER include/spdk/fsdev.h 00:03:31.753 TEST_HEADER include/spdk/fsdev_module.h 00:03:31.753 TEST_HEADER include/spdk/ftl.h 00:03:31.753 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:31.753 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.753 TEST_HEADER include/spdk/hexlify.h 00:03:31.753 TEST_HEADER include/spdk/histogram_data.h 00:03:31.753 CC test/app/histogram_perf/histogram_perf.o 00:03:31.753 TEST_HEADER include/spdk/idxd.h 00:03:31.753 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.753 TEST_HEADER include/spdk/init.h 00:03:31.753 TEST_HEADER include/spdk/ioat.h 00:03:31.753 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.753 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.753 TEST_HEADER include/spdk/json.h 00:03:31.753 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.753 TEST_HEADER include/spdk/keyring.h 00:03:31.753 TEST_HEADER include/spdk/keyring_module.h 00:03:31.753 TEST_HEADER include/spdk/likely.h 00:03:31.753 TEST_HEADER include/spdk/log.h 00:03:31.753 TEST_HEADER include/spdk/lvol.h 00:03:31.753 TEST_HEADER include/spdk/md5.h 00:03:31.753 TEST_HEADER include/spdk/memory.h 00:03:31.753 TEST_HEADER include/spdk/mmio.h 00:03:31.753 CC 
test/app/bdev_svc/bdev_svc.o 00:03:31.753 TEST_HEADER include/spdk/nbd.h 00:03:31.753 TEST_HEADER include/spdk/net.h 00:03:31.753 TEST_HEADER include/spdk/notify.h 00:03:31.753 TEST_HEADER include/spdk/nvme.h 00:03:31.753 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.753 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.753 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.753 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.753 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.753 CC test/dma/test_dma/test_dma.o 00:03:31.753 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.753 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.753 TEST_HEADER include/spdk/nvmf.h 00:03:31.753 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.753 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.753 TEST_HEADER include/spdk/opal.h 00:03:31.753 TEST_HEADER include/spdk/opal_spec.h 00:03:31.753 TEST_HEADER include/spdk/pci_ids.h 00:03:31.753 TEST_HEADER include/spdk/pipe.h 00:03:31.753 TEST_HEADER include/spdk/queue.h 00:03:31.753 CC test/env/mem_callbacks/mem_callbacks.o 00:03:31.753 TEST_HEADER include/spdk/reduce.h 00:03:31.753 TEST_HEADER include/spdk/rpc.h 00:03:31.753 TEST_HEADER include/spdk/scheduler.h 00:03:31.753 TEST_HEADER include/spdk/scsi.h 00:03:31.753 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.753 TEST_HEADER include/spdk/sock.h 00:03:31.753 TEST_HEADER include/spdk/stdinc.h 00:03:31.753 TEST_HEADER include/spdk/string.h 00:03:31.753 TEST_HEADER include/spdk/thread.h 00:03:31.753 TEST_HEADER include/spdk/trace.h 00:03:31.753 TEST_HEADER include/spdk/trace_parser.h 00:03:31.753 CC app/spdk_nvme_discover/discovery_aer.o 00:03:31.753 TEST_HEADER include/spdk/tree.h 00:03:31.753 TEST_HEADER include/spdk/ublk.h 00:03:31.753 TEST_HEADER include/spdk/util.h 00:03:31.753 TEST_HEADER include/spdk/uuid.h 00:03:31.753 TEST_HEADER include/spdk/version.h 00:03:31.753 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:31.753 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.753 TEST_HEADER 
include/spdk/vhost.h 00:03:31.753 TEST_HEADER include/spdk/vmd.h 00:03:31.753 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:31.753 TEST_HEADER include/spdk/xor.h 00:03:31.753 TEST_HEADER include/spdk/zipf.h 00:03:31.753 CXX test/cpp_headers/accel.o 00:03:32.012 LINK verify 00:03:32.012 LINK histogram_perf 00:03:32.012 LINK bdev_svc 00:03:32.012 CXX test/cpp_headers/accel_module.o 00:03:32.012 LINK spdk_nvme_discover 00:03:32.269 CXX test/cpp_headers/assert.o 00:03:32.269 CC examples/thread/thread/thread_ex.o 00:03:32.269 CC examples/sock/hello_world/hello_sock.o 00:03:32.269 LINK nvme_fuzz 00:03:32.269 LINK test_dma 00:03:32.527 CXX test/cpp_headers/barrier.o 00:03:32.527 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.527 LINK mem_callbacks 00:03:32.527 CC examples/idxd/perf/perf.o 00:03:32.527 LINK lsvmd 00:03:32.527 CXX test/cpp_headers/base64.o 00:03:32.527 LINK thread 00:03:32.527 LINK spdk_nvme_perf 00:03:32.527 LINK hello_sock 00:03:32.527 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:32.787 CC test/env/vtophys/vtophys.o 00:03:32.787 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:32.787 CXX test/cpp_headers/bdev.o 00:03:32.787 LINK spdk_nvme_identify 00:03:32.787 LINK vtophys 00:03:32.787 LINK idxd_perf 00:03:32.787 CC examples/vmd/led/led.o 00:03:32.787 LINK env_dpdk_post_init 00:03:32.787 CC test/rpc_client/rpc_client_test.o 00:03:33.046 CC test/env/pci/pci_ut.o 00:03:33.046 CC test/env/memory/memory_ut.o 00:03:33.046 CXX test/cpp_headers/bdev_module.o 00:03:33.046 CXX test/cpp_headers/bdev_zone.o 00:03:33.046 LINK led 00:03:33.046 CC app/spdk_top/spdk_top.o 00:03:33.046 LINK rpc_client_test 00:03:33.305 CC app/vhost/vhost.o 00:03:33.305 CXX test/cpp_headers/bit_array.o 00:03:33.305 CXX test/cpp_headers/bit_pool.o 00:03:33.305 CC examples/accel/perf/accel_perf.o 00:03:33.305 CC test/app/jsoncat/jsoncat.o 00:03:33.305 CC test/app/stub/stub.o 00:03:33.305 LINK vhost 00:03:33.305 LINK pci_ut 00:03:33.564 LINK jsoncat 00:03:33.564 CXX 
test/cpp_headers/blob_bdev.o 00:03:33.564 LINK stub 00:03:33.564 CC examples/blob/hello_world/hello_blob.o 00:03:33.564 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.822 CXX test/cpp_headers/blobfs.o 00:03:33.822 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:33.822 CC examples/blob/cli/blobcli.o 00:03:33.822 CC app/spdk_dd/spdk_dd.o 00:03:33.822 LINK hello_blob 00:03:33.822 CXX test/cpp_headers/blob.o 00:03:33.822 LINK accel_perf 00:03:34.081 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.082 CC examples/nvme/hello_world/hello_world.o 00:03:34.082 CXX test/cpp_headers/conf.o 00:03:34.082 CXX test/cpp_headers/config.o 00:03:34.340 LINK spdk_top 00:03:34.340 CC examples/nvme/reconnect/reconnect.o 00:03:34.340 LINK memory_ut 00:03:34.340 LINK spdk_dd 00:03:34.340 CXX test/cpp_headers/cpuset.o 00:03:34.340 LINK hello_world 00:03:34.340 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:34.340 LINK blobcli 00:03:34.340 CXX test/cpp_headers/crc16.o 00:03:34.598 LINK vhost_fuzz 00:03:34.598 CXX test/cpp_headers/crc32.o 00:03:34.598 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.598 CC examples/nvme/arbitration/arbitration.o 00:03:34.598 LINK hello_fsdev 00:03:34.598 CC app/fio/nvme/fio_plugin.o 00:03:34.598 LINK reconnect 00:03:34.598 CXX test/cpp_headers/crc64.o 00:03:34.598 CC examples/nvme/hotplug/hotplug.o 00:03:34.598 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:34.857 CC test/accel/dif/dif.o 00:03:34.857 CXX test/cpp_headers/dif.o 00:03:34.857 LINK cmb_copy 00:03:34.857 LINK iscsi_fuzz 00:03:34.857 LINK arbitration 00:03:34.857 CC examples/nvme/abort/abort.o 00:03:34.857 LINK hotplug 00:03:35.117 CXX test/cpp_headers/dma.o 00:03:35.117 CC app/fio/bdev/fio_plugin.o 00:03:35.117 LINK nvme_manage 00:03:35.117 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.117 CXX test/cpp_headers/endian.o 00:03:35.376 CC test/blobfs/mkfs/mkfs.o 00:03:35.376 CC test/event/event_perf/event_perf.o 00:03:35.376 LINK abort 00:03:35.376 LINK spdk_nvme 00:03:35.376 CXX 
test/cpp_headers/env_dpdk.o 00:03:35.376 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.376 CC test/event/reactor/reactor.o 00:03:35.376 LINK pmr_persistence 00:03:35.637 LINK event_perf 00:03:35.637 CC test/event/reactor_perf/reactor_perf.o 00:03:35.637 CXX test/cpp_headers/env.o 00:03:35.637 LINK mkfs 00:03:35.637 LINK reactor 00:03:35.637 LINK spdk_bdev 00:03:35.637 LINK hello_bdev 00:03:35.637 CC test/event/app_repeat/app_repeat.o 00:03:35.637 CXX test/cpp_headers/event.o 00:03:35.637 LINK dif 00:03:35.637 LINK reactor_perf 00:03:35.897 CXX test/cpp_headers/fd_group.o 00:03:35.897 CC test/event/scheduler/scheduler.o 00:03:35.897 CC examples/bdev/bdevperf/bdevperf.o 00:03:35.897 LINK app_repeat 00:03:35.897 CXX test/cpp_headers/fd.o 00:03:35.897 CXX test/cpp_headers/file.o 00:03:35.897 CC test/nvme/aer/aer.o 00:03:35.897 CC test/nvme/reset/reset.o 00:03:36.161 CC test/lvol/esnap/esnap.o 00:03:36.161 CC test/nvme/sgl/sgl.o 00:03:36.161 CXX test/cpp_headers/fsdev.o 00:03:36.161 LINK scheduler 00:03:36.161 CC test/nvme/e2edp/nvme_dp.o 00:03:36.161 CC test/nvme/overhead/overhead.o 00:03:36.161 CC test/nvme/err_injection/err_injection.o 00:03:36.161 CXX test/cpp_headers/fsdev_module.o 00:03:36.420 LINK reset 00:03:36.420 LINK aer 00:03:36.420 LINK err_injection 00:03:36.420 LINK sgl 00:03:36.420 LINK nvme_dp 00:03:36.420 CXX test/cpp_headers/ftl.o 00:03:36.420 LINK overhead 00:03:36.420 CC test/bdev/bdevio/bdevio.o 00:03:36.679 CC test/nvme/startup/startup.o 00:03:36.679 CC test/nvme/reserve/reserve.o 00:03:36.679 CC test/nvme/simple_copy/simple_copy.o 00:03:36.679 CXX test/cpp_headers/fuse_dispatcher.o 00:03:36.679 CC test/nvme/connect_stress/connect_stress.o 00:03:36.679 CC test/nvme/boot_partition/boot_partition.o 00:03:36.679 LINK startup 00:03:36.679 CC test/nvme/compliance/nvme_compliance.o 00:03:36.938 CXX test/cpp_headers/gpt_spec.o 00:03:36.938 LINK reserve 00:03:36.938 LINK connect_stress 00:03:36.938 LINK boot_partition 00:03:36.938 LINK simple_copy 
00:03:36.938 LINK bdevio 00:03:36.938 CXX test/cpp_headers/hexlify.o 00:03:36.938 LINK bdevperf 00:03:36.938 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.196 CXX test/cpp_headers/histogram_data.o 00:03:37.196 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.196 CXX test/cpp_headers/idxd.o 00:03:37.196 CC test/nvme/fdp/fdp.o 00:03:37.196 LINK nvme_compliance 00:03:37.196 CXX test/cpp_headers/idxd_spec.o 00:03:37.196 CC test/nvme/cuse/cuse.o 00:03:37.196 CXX test/cpp_headers/init.o 00:03:37.455 LINK fused_ordering 00:03:37.455 CXX test/cpp_headers/ioat.o 00:03:37.455 CXX test/cpp_headers/ioat_spec.o 00:03:37.455 CXX test/cpp_headers/iscsi_spec.o 00:03:37.455 LINK doorbell_aers 00:03:37.455 CC examples/nvmf/nvmf/nvmf.o 00:03:37.455 CXX test/cpp_headers/json.o 00:03:37.455 CXX test/cpp_headers/jsonrpc.o 00:03:37.455 CXX test/cpp_headers/keyring.o 00:03:37.455 CXX test/cpp_headers/keyring_module.o 00:03:37.713 LINK fdp 00:03:37.713 CXX test/cpp_headers/likely.o 00:03:37.713 CXX test/cpp_headers/log.o 00:03:37.713 CXX test/cpp_headers/lvol.o 00:03:37.713 CXX test/cpp_headers/md5.o 00:03:37.713 CXX test/cpp_headers/memory.o 00:03:37.713 CXX test/cpp_headers/mmio.o 00:03:37.713 CXX test/cpp_headers/nbd.o 00:03:37.713 LINK nvmf 00:03:37.972 CXX test/cpp_headers/net.o 00:03:37.972 CXX test/cpp_headers/notify.o 00:03:37.972 CXX test/cpp_headers/nvme.o 00:03:37.972 CXX test/cpp_headers/nvme_intel.o 00:03:37.972 CXX test/cpp_headers/nvme_ocssd.o 00:03:37.972 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:37.972 CXX test/cpp_headers/nvme_spec.o 00:03:37.972 CXX test/cpp_headers/nvme_zns.o 00:03:37.972 CXX test/cpp_headers/nvmf_cmd.o 00:03:37.972 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.230 CXX test/cpp_headers/nvmf.o 00:03:38.230 CXX test/cpp_headers/nvmf_spec.o 00:03:38.230 CXX test/cpp_headers/nvmf_transport.o 00:03:38.230 CXX test/cpp_headers/opal.o 00:03:38.230 CXX test/cpp_headers/opal_spec.o 00:03:38.230 CXX test/cpp_headers/pci_ids.o 00:03:38.230 CXX 
test/cpp_headers/pipe.o 00:03:38.230 CXX test/cpp_headers/queue.o 00:03:38.230 CXX test/cpp_headers/reduce.o 00:03:38.230 CXX test/cpp_headers/rpc.o 00:03:38.230 CXX test/cpp_headers/scheduler.o 00:03:38.489 CXX test/cpp_headers/scsi.o 00:03:38.489 CXX test/cpp_headers/scsi_spec.o 00:03:38.489 CXX test/cpp_headers/sock.o 00:03:38.489 CXX test/cpp_headers/stdinc.o 00:03:38.489 CXX test/cpp_headers/string.o 00:03:38.489 CXX test/cpp_headers/thread.o 00:03:38.489 CXX test/cpp_headers/trace.o 00:03:38.489 CXX test/cpp_headers/trace_parser.o 00:03:38.489 CXX test/cpp_headers/tree.o 00:03:38.489 CXX test/cpp_headers/ublk.o 00:03:38.489 CXX test/cpp_headers/util.o 00:03:38.747 CXX test/cpp_headers/uuid.o 00:03:38.747 CXX test/cpp_headers/version.o 00:03:38.747 CXX test/cpp_headers/vfio_user_pci.o 00:03:38.747 CXX test/cpp_headers/vfio_user_spec.o 00:03:38.747 CXX test/cpp_headers/vhost.o 00:03:38.747 CXX test/cpp_headers/vmd.o 00:03:38.747 CXX test/cpp_headers/xor.o 00:03:38.747 CXX test/cpp_headers/zipf.o 00:03:38.747 LINK cuse 00:03:44.020 LINK esnap 00:03:44.020 00:03:44.020 real 1m39.421s 00:03:44.020 user 8m39.694s 00:03:44.020 sys 1m51.270s 00:03:44.020 17:38:10 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:44.020 17:38:10 make -- common/autotest_common.sh@10 -- $ set +x 00:03:44.020 ************************************ 00:03:44.020 END TEST make 00:03:44.020 ************************************ 00:03:44.020 17:38:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:44.020 17:38:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:44.020 17:38:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:44.020 17:38:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.020 17:38:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:44.020 17:38:10 -- pm/common@44 -- $ pid=5469 00:03:44.020 17:38:10 -- pm/common@50 -- $ kill -TERM 5469 00:03:44.020 17:38:10 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.020 17:38:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:44.020 17:38:10 -- pm/common@44 -- $ pid=5471 00:03:44.020 17:38:10 -- pm/common@50 -- $ kill -TERM 5471 00:03:44.020 17:38:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:44.020 17:38:10 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:44.020 17:38:10 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:44.020 17:38:10 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:44.020 17:38:10 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:44.020 17:38:10 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:44.020 17:38:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.020 17:38:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.020 17:38:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.020 17:38:10 -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.020 17:38:10 -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.020 17:38:10 -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.020 17:38:10 -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.020 17:38:10 -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.020 17:38:10 -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.020 17:38:10 -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.020 17:38:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.020 17:38:10 -- scripts/common.sh@344 -- # case "$op" in 00:03:44.020 17:38:10 -- scripts/common.sh@345 -- # : 1 00:03:44.020 17:38:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.020 17:38:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.020 17:38:10 -- scripts/common.sh@365 -- # decimal 1 00:03:44.020 17:38:10 -- scripts/common.sh@353 -- # local d=1 00:03:44.020 17:38:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.020 17:38:10 -- scripts/common.sh@355 -- # echo 1 00:03:44.020 17:38:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.020 17:38:10 -- scripts/common.sh@366 -- # decimal 2 00:03:44.020 17:38:10 -- scripts/common.sh@353 -- # local d=2 00:03:44.020 17:38:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.020 17:38:10 -- scripts/common.sh@355 -- # echo 2 00:03:44.020 17:38:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.020 17:38:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.020 17:38:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.020 17:38:10 -- scripts/common.sh@368 -- # return 0 00:03:44.020 17:38:10 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.020 17:38:10 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:44.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.020 --rc genhtml_branch_coverage=1 00:03:44.020 --rc genhtml_function_coverage=1 00:03:44.020 --rc genhtml_legend=1 00:03:44.020 --rc geninfo_all_blocks=1 00:03:44.020 --rc geninfo_unexecuted_blocks=1 00:03:44.020 00:03:44.020 ' 00:03:44.020 17:38:10 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:44.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.020 --rc genhtml_branch_coverage=1 00:03:44.020 --rc genhtml_function_coverage=1 00:03:44.020 --rc genhtml_legend=1 00:03:44.020 --rc geninfo_all_blocks=1 00:03:44.020 --rc geninfo_unexecuted_blocks=1 00:03:44.020 00:03:44.020 ' 00:03:44.020 17:38:10 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:44.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.020 --rc genhtml_branch_coverage=1 00:03:44.020 --rc 
genhtml_function_coverage=1 00:03:44.020 --rc genhtml_legend=1 00:03:44.020 --rc geninfo_all_blocks=1 00:03:44.020 --rc geninfo_unexecuted_blocks=1 00:03:44.020 00:03:44.020 ' 00:03:44.020 17:38:10 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:44.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.020 --rc genhtml_branch_coverage=1 00:03:44.020 --rc genhtml_function_coverage=1 00:03:44.020 --rc genhtml_legend=1 00:03:44.020 --rc geninfo_all_blocks=1 00:03:44.020 --rc geninfo_unexecuted_blocks=1 00:03:44.020 00:03:44.020 ' 00:03:44.020 17:38:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:44.020 17:38:10 -- nvmf/common.sh@7 -- # uname -s 00:03:44.020 17:38:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.020 17:38:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.021 17:38:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.021 17:38:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.021 17:38:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.021 17:38:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.021 17:38:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.021 17:38:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.021 17:38:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.021 17:38:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.021 17:38:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6aee6518-a1c0-4f87-b451-1b81ad9dbce6 00:03:44.021 17:38:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=6aee6518-a1c0-4f87-b451-1b81ad9dbce6 00:03:44.021 17:38:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.021 17:38:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.021 17:38:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:44.021 17:38:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:44.021 17:38:11 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:44.021 17:38:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:44.021 17:38:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.021 17:38:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.021 17:38:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.021 17:38:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.021 17:38:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.021 17:38:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.021 17:38:11 -- paths/export.sh@5 -- # export PATH 00:03:44.021 17:38:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.021 17:38:11 -- nvmf/common.sh@51 -- # : 0 00:03:44.021 17:38:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:44.021 17:38:11 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:44.021 17:38:11 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:44.021 17:38:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.021 17:38:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.021 17:38:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:44.021 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:44.021 17:38:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:44.021 17:38:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:44.021 17:38:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:44.021 17:38:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:44.021 17:38:11 -- spdk/autotest.sh@32 -- # uname -s 00:03:44.021 17:38:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:44.021 17:38:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:44.021 17:38:11 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:44.021 17:38:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:44.021 17:38:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:44.021 17:38:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:44.021 17:38:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:44.021 17:38:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:44.021 17:38:11 -- spdk/autotest.sh@48 -- # udevadm_pid=54581 00:03:44.021 17:38:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:44.021 17:38:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:44.021 17:38:11 -- pm/common@17 -- # local monitor 00:03:44.021 17:38:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.021 17:38:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.021 17:38:11 -- pm/common@21 -- # date +%s 00:03:44.021 17:38:11 -- pm/common@25 -- # sleep 1 00:03:44.021 17:38:11 -- 
pm/common@21 -- # date +%s 00:03:44.021 17:38:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732124291 00:03:44.021 17:38:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732124291 00:03:44.021 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732124291_collect-cpu-load.pm.log 00:03:44.021 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732124291_collect-vmstat.pm.log 00:03:44.962 17:38:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.962 17:38:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.962 17:38:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.962 17:38:12 -- common/autotest_common.sh@10 -- # set +x 00:03:44.962 17:38:12 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.962 17:38:12 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:44.962 17:38:12 -- common/autotest_common.sh@10 -- # set +x 00:03:45.221 17:38:12 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:45.221 17:38:12 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:45.221 17:38:12 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:45.221 17:38:12 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:45.221 17:38:12 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:45.221 17:38:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:45.221 17:38:12 -- common/autotest_common.sh@1457 -- # uname 00:03:45.221 17:38:12 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:45.221 17:38:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:45.221 17:38:12 -- common/autotest_common.sh@1477 -- 
# uname 00:03:45.221 17:38:12 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:45.221 17:38:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:45.221 17:38:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:45.221 lcov: LCOV version 1.15 00:03:45.221 17:38:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:03.319 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:03.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:21.405 17:38:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:21.405 17:38:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.405 17:38:47 -- common/autotest_common.sh@10 -- # set +x 00:04:21.405 17:38:47 -- spdk/autotest.sh@78 -- # rm -f 00:04:21.405 17:38:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.405 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:21.405 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:21.405 17:38:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:21.405 17:38:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:21.405 17:38:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:21.405 17:38:48 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:21.405 
17:38:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:21.405 17:38:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:21.405 17:38:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:21.405 17:38:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.405 17:38:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.405 17:38:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:21.405 17:38:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:21.405 17:38:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:21.405 17:38:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:21.405 17:38:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.405 17:38:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:21.405 17:38:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:21.405 17:38:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:21.405 17:38:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:21.405 17:38:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.405 17:38:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:21.405 17:38:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:21.405 17:38:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:21.405 17:38:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:21.405 17:38:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.405 17:38:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:21.405 17:38:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.405 17:38:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:21.405 17:38:48 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:21.405 17:38:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:21.405 17:38:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:21.405 No valid GPT data, bailing 00:04:21.405 17:38:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:21.405 17:38:48 -- scripts/common.sh@394 -- # pt= 00:04:21.405 17:38:48 -- scripts/common.sh@395 -- # return 1 00:04:21.405 17:38:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:21.405 1+0 records in 00:04:21.405 1+0 records out 00:04:21.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062656 s, 167 MB/s 00:04:21.405 17:38:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.405 17:38:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:21.405 17:38:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:21.405 17:38:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:21.405 17:38:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:21.405 No valid GPT data, bailing 00:04:21.405 17:38:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:21.405 17:38:48 -- scripts/common.sh@394 -- # pt= 00:04:21.405 17:38:48 -- scripts/common.sh@395 -- # return 1 00:04:21.405 17:38:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:21.405 1+0 records in 00:04:21.405 1+0 records out 00:04:21.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426346 s, 246 MB/s 00:04:21.405 17:38:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.405 17:38:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:21.405 17:38:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:21.405 17:38:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:21.405 17:38:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:21.405 No valid GPT data, bailing 00:04:21.405 17:38:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:21.405 17:38:48 -- scripts/common.sh@394 -- # pt= 00:04:21.405 17:38:48 -- scripts/common.sh@395 -- # return 1 00:04:21.405 17:38:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:21.405 1+0 records in 00:04:21.405 1+0 records out 00:04:21.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495285 s, 212 MB/s 00:04:21.405 17:38:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.405 17:38:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:21.405 17:38:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:21.405 17:38:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:21.405 17:38:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:21.405 No valid GPT data, bailing 00:04:21.405 17:38:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:21.405 17:38:48 -- scripts/common.sh@394 -- # pt= 00:04:21.405 17:38:48 -- scripts/common.sh@395 -- # return 1 00:04:21.405 17:38:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:21.405 1+0 records in 00:04:21.405 1+0 records out 00:04:21.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386288 s, 271 MB/s 00:04:21.405 17:38:48 -- spdk/autotest.sh@105 -- # sync 00:04:21.405 17:38:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:21.405 17:38:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:21.405 17:38:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.698 17:38:51 -- spdk/autotest.sh@111 -- # uname -s 00:04:24.698 17:38:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:24.698 17:38:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:24.698 17:38:51 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:24.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.957 Hugepages 00:04:24.957 node hugesize free / total 00:04:24.957 node0 1048576kB 0 / 0 00:04:24.957 node0 2048kB 0 / 0 00:04:24.957 00:04:24.957 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.957 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:25.216 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:25.217 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:25.217 17:38:52 -- spdk/autotest.sh@117 -- # uname -s 00:04:25.217 17:38:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:25.217 17:38:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:25.217 17:38:52 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.155 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.155 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.413 17:38:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:27.350 17:38:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:27.350 17:38:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:27.350 17:38:54 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:27.350 17:38:54 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:27.350 17:38:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:27.350 17:38:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:27.350 17:38:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.350 17:38:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:27.350 17:38:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:27.350 17:38:54 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:27.350 17:38:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:27.350 17:38:54 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.922 Waiting for block devices as requested 00:04:27.922 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:27.922 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.182 17:38:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.182 17:38:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:28.182 17:38:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:28.182 17:38:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:28.182 17:38:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.182 17:38:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:28.182 17:38:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.182 17:38:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:28.182 17:38:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:28.182 17:38:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:28.182 17:38:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:28.182 17:38:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.182 17:38:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.182 17:38:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.182 17:38:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.182 17:38:55 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:28.182 17:38:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:28.182 17:38:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.182 17:38:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.182 17:38:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.182 17:38:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.182 17:38:55 -- common/autotest_common.sh@1543 -- # continue 00:04:28.182 17:38:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.182 17:38:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:28.182 17:38:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:28.182 17:38:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:28.182 17:38:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.182 17:38:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:28.182 17:38:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.182 17:38:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:28.182 17:38:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:28.182 17:38:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:28.182 17:38:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.182 17:38:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:28.182 17:38:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.182 17:38:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.182 17:38:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.182 17:38:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.182 17:38:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 
00:04:28.182 17:38:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:28.182 17:38:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.182 17:38:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.182 17:38:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.182 17:38:55 -- common/autotest_common.sh@1543 -- # continue 00:04:28.182 17:38:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:28.182 17:38:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.182 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:04:28.182 17:38:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:28.182 17:38:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.182 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:04:28.182 17:38:55 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.121 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.121 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.381 17:38:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:29.381 17:38:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.381 17:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:29.381 17:38:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:29.381 17:38:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:29.381 17:38:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.381 17:38:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:29.381 17:38:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:29.381 17:38:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:29.381 17:38:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:29.381 17:38:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:29.381 
17:38:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:29.381 17:38:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:29.381 17:38:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.381 17:38:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:29.381 17:38:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:29.381 17:38:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:29.381 17:38:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:29.381 17:38:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.381 17:38:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:29.381 17:38:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.381 17:38:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.381 17:38:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.381 17:38:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:29.381 17:38:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.381 17:38:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.381 17:38:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:29.381 17:38:56 -- common/autotest_common.sh@1572 -- # return 0 00:04:29.381 17:38:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:29.381 17:38:56 -- common/autotest_common.sh@1580 -- # return 0 00:04:29.381 17:38:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:29.381 17:38:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:29.381 17:38:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.381 17:38:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.381 17:38:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:29.381 17:38:56 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.381 17:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:29.381 17:38:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:29.381 17:38:56 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.381 17:38:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.381 17:38:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.381 17:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:29.381 ************************************ 00:04:29.381 START TEST env 00:04:29.381 ************************************ 00:04:29.381 17:38:56 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.640 * Looking for test storage... 00:04:29.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:29.640 17:38:56 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.640 17:38:56 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.640 17:38:56 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.640 17:38:56 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.640 17:38:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.640 17:38:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.640 17:38:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.640 17:38:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.640 17:38:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.640 17:38:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.640 17:38:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.640 17:38:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.640 17:38:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.640 17:38:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.640 17:38:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.640 17:38:56 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:29.640 17:38:56 env -- scripts/common.sh@345 -- # : 1 00:04:29.640 17:38:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.640 17:38:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.640 17:38:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:29.640 17:38:56 env -- scripts/common.sh@353 -- # local d=1 00:04:29.640 17:38:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.640 17:38:56 env -- scripts/common.sh@355 -- # echo 1 00:04:29.640 17:38:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.640 17:38:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:29.640 17:38:56 env -- scripts/common.sh@353 -- # local d=2 00:04:29.640 17:38:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.640 17:38:56 env -- scripts/common.sh@355 -- # echo 2 00:04:29.640 17:38:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.640 17:38:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.640 17:38:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.640 17:38:56 env -- scripts/common.sh@368 -- # return 0 00:04:29.641 17:38:56 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.641 17:38:56 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.641 --rc genhtml_branch_coverage=1 00:04:29.641 --rc genhtml_function_coverage=1 00:04:29.641 --rc genhtml_legend=1 00:04:29.641 --rc geninfo_all_blocks=1 00:04:29.641 --rc geninfo_unexecuted_blocks=1 00:04:29.641 00:04:29.641 ' 00:04:29.641 17:38:56 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.641 --rc genhtml_branch_coverage=1 00:04:29.641 --rc genhtml_function_coverage=1 00:04:29.641 --rc genhtml_legend=1 00:04:29.641 --rc 
geninfo_all_blocks=1 00:04:29.641 --rc geninfo_unexecuted_blocks=1 00:04:29.641 00:04:29.641 ' 00:04:29.641 17:38:56 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.641 --rc genhtml_branch_coverage=1 00:04:29.641 --rc genhtml_function_coverage=1 00:04:29.641 --rc genhtml_legend=1 00:04:29.641 --rc geninfo_all_blocks=1 00:04:29.641 --rc geninfo_unexecuted_blocks=1 00:04:29.641 00:04:29.641 ' 00:04:29.641 17:38:56 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.641 --rc genhtml_branch_coverage=1 00:04:29.641 --rc genhtml_function_coverage=1 00:04:29.641 --rc genhtml_legend=1 00:04:29.641 --rc geninfo_all_blocks=1 00:04:29.641 --rc geninfo_unexecuted_blocks=1 00:04:29.641 00:04:29.641 ' 00:04:29.641 17:38:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:29.641 17:38:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.641 17:38:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.641 17:38:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.641 ************************************ 00:04:29.641 START TEST env_memory 00:04:29.641 ************************************ 00:04:29.641 17:38:56 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:29.641 00:04:29.641 00:04:29.641 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.641 http://cunit.sourceforge.net/ 00:04:29.641 00:04:29.641 00:04:29.641 Suite: memory 00:04:29.641 Test: alloc and free memory map ...[2024-11-20 17:38:56.760062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:29.641 passed 00:04:29.901 Test: mem map translation ...[2024-11-20 17:38:56.819204] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:29.901 [2024-11-20 17:38:56.819281] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:29.901 [2024-11-20 17:38:56.819348] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:29.901 [2024-11-20 17:38:56.819372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:29.901 passed 00:04:29.901 Test: mem map registration ...[2024-11-20 17:38:56.898383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:29.901 [2024-11-20 17:38:56.898450] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:29.901 passed 00:04:29.901 Test: mem map adjacent registrations ...passed 00:04:29.901 00:04:29.901 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.901 suites 1 1 n/a 0 0 00:04:29.901 tests 4 4 4 0 0 00:04:29.901 asserts 152 152 152 0 n/a 00:04:29.901 00:04:29.901 Elapsed time = 0.274 seconds 00:04:29.901 00:04:29.901 real 0m0.310s 00:04:29.901 user 0m0.276s 00:04:29.901 sys 0m0.029s 00:04:29.901 17:38:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.901 17:38:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:29.901 ************************************ 00:04:29.901 END TEST env_memory 00:04:29.901 ************************************ 00:04:29.901 17:38:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:29.901 
17:38:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.901 17:38:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.901 17:38:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.901 ************************************ 00:04:29.901 START TEST env_vtophys 00:04:29.901 ************************************ 00:04:29.901 17:38:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.161 EAL: lib.eal log level changed from notice to debug 00:04:30.161 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 1 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 2 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 3 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 4 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 5 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 6 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 7 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 8 as core 0 on socket 0 00:04:30.161 EAL: Detected lcore 9 as core 0 on socket 0 00:04:30.161 EAL: Maximum logical cores by configuration: 128 00:04:30.161 EAL: Detected CPU lcores: 10 00:04:30.161 EAL: Detected NUMA nodes: 1 00:04:30.161 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.161 EAL: Detected shared linkage of DPDK 00:04:30.161 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.161 EAL: Selected IOVA mode 'PA' 00:04:30.161 EAL: Probing VFIO support... 00:04:30.161 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.161 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:30.161 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.161 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.161 EAL: Setting up physically contiguous memory... 
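The EAL probe above looks for /sys/module/vfio, finds nothing, and falls back to uio with IOVA mode 'PA'. A minimal sketch of that existence check, assuming only the sysfs path reported in the log (the helper name vfio_state is mine, not from the SPDK scripts):

```shell
# Sketch of EAL's VFIO probe: report whether a module directory exists
# under /sys/module. The helper name vfio_state is illustrative only.
vfio_state() {
    local sysdir=$1   # e.g. /sys/module/vfio
    if [ -e "$sysdir" ]; then
        echo loaded
    else
        echo absent
    fi
}

vfio_state /sys/module/vfio
```

When the module is absent, as in this run, DPDK skips VFIO support and the setup script binds the NVMe controllers through uio_pci_generic instead.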
00:04:30.161 EAL: Setting maximum number of open files to 524288 00:04:30.161 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.161 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.161 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.161 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.161 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.161 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.161 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.161 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.161 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.161 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.161 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.161 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.161 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.161 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.161 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.161 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.161 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.161 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.161 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.161 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.161 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.161 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.161 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.161 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.161 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.161 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.161 EAL: Hugepages will be freed exactly as allocated. 
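Each memseg list above is created with n_segs:8192 pages of hugepage_sz:2097152 bytes, and the 0x400000000 (16 GiB) virtual-area reservation EAL reports per list is simply their product:

```shell
# Reproduce the per-list VA reservation size reported by EAL:
# 8192 pages * 2 MiB hugepage size = 0x400000000 bytes (16 GiB)
n_segs=8192
hugepage_sz=2097152          # 2 MiB hugepages, as in the log
printf '0x%x\n' $((n_segs * hugepage_sz))   # prints 0x400000000
```

With 4 such lists, the process reserves 64 GiB of virtual address space up front, even though (as the Hugepages table earlier shows) no hugepages are actually allocated yet.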
00:04:30.161 EAL: No shared files mode enabled, IPC is disabled 00:04:30.161 EAL: No shared files mode enabled, IPC is disabled 00:04:30.161 EAL: TSC frequency is ~2290000 KHz 00:04:30.161 EAL: Main lcore 0 is ready (tid=7fa5762bea40;cpuset=[0]) 00:04:30.161 EAL: Trying to obtain current memory policy. 00:04:30.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.161 EAL: Restoring previous memory policy: 0 00:04:30.161 EAL: request: mp_malloc_sync 00:04:30.161 EAL: No shared files mode enabled, IPC is disabled 00:04:30.161 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.161 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.161 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.161 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.161 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:30.161 00:04:30.161 00:04:30.161 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.161 http://cunit.sourceforge.net/ 00:04:30.161 00:04:30.161 00:04:30.161 Suite: components_suite 00:04:30.732 Test: vtophys_malloc_test ...passed 00:04:30.732 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.732 EAL: Restoring previous memory policy: 4 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.732 EAL: Trying to obtain current memory policy. 
00:04:30.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.732 EAL: Restoring previous memory policy: 4 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.732 EAL: Trying to obtain current memory policy. 00:04:30.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.732 EAL: Restoring previous memory policy: 4 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.732 EAL: Trying to obtain current memory policy. 00:04:30.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.732 EAL: Restoring previous memory policy: 4 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.732 EAL: Trying to obtain current memory policy. 
00:04:30.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.732 EAL: Restoring previous memory policy: 4 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.732 EAL: request: mp_malloc_sync 00:04:30.732 EAL: No shared files mode enabled, IPC is disabled 00:04:30.732 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.991 EAL: Trying to obtain current memory policy. 00:04:30.991 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.991 EAL: Restoring previous memory policy: 4 00:04:30.991 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.991 EAL: request: mp_malloc_sync 00:04:30.991 EAL: No shared files mode enabled, IPC is disabled 00:04:30.991 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.991 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.991 EAL: request: mp_malloc_sync 00:04:30.991 EAL: No shared files mode enabled, IPC is disabled 00:04:30.991 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.251 EAL: Trying to obtain current memory policy. 00:04:31.251 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.251 EAL: Restoring previous memory policy: 4 00:04:31.251 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.251 EAL: request: mp_malloc_sync 00:04:31.251 EAL: No shared files mode enabled, IPC is disabled 00:04:31.251 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.511 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.511 EAL: request: mp_malloc_sync 00:04:31.511 EAL: No shared files mode enabled, IPC is disabled 00:04:31.511 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.770 EAL: Trying to obtain current memory policy. 
00:04:31.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.770 EAL: Restoring previous memory policy: 4 00:04:31.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.770 EAL: request: mp_malloc_sync 00:04:31.770 EAL: No shared files mode enabled, IPC is disabled 00:04:31.770 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.337 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.337 EAL: request: mp_malloc_sync 00:04:32.337 EAL: No shared files mode enabled, IPC is disabled 00:04:32.337 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.904 EAL: Trying to obtain current memory policy. 00:04:32.904 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.904 EAL: Restoring previous memory policy: 4 00:04:32.904 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.904 EAL: request: mp_malloc_sync 00:04:32.904 EAL: No shared files mode enabled, IPC is disabled 00:04:32.904 EAL: Heap on socket 0 was expanded by 514MB 00:04:33.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.101 EAL: request: mp_malloc_sync 00:04:34.101 EAL: No shared files mode enabled, IPC is disabled 00:04:34.101 EAL: Heap on socket 0 was shrunk by 514MB 00:04:35.037 EAL: Trying to obtain current memory policy. 
00:04:35.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.296 EAL: Restoring previous memory policy: 4 00:04:35.296 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.296 EAL: request: mp_malloc_sync 00:04:35.296 EAL: No shared files mode enabled, IPC is disabled 00:04:35.296 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.203 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.203 EAL: request: mp_malloc_sync 00:04:37.203 EAL: No shared files mode enabled, IPC is disabled 00:04:37.203 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:39.738 passed 00:04:39.738 00:04:39.738 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.738 suites 1 1 n/a 0 0 00:04:39.738 tests 2 2 2 0 0 00:04:39.738 asserts 5768 5768 5768 0 n/a 00:04:39.738 00:04:39.738 Elapsed time = 8.992 seconds 00:04:39.738 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.738 EAL: request: mp_malloc_sync 00:04:39.738 EAL: No shared files mode enabled, IPC is disabled 00:04:39.738 EAL: Heap on socket 0 was shrunk by 2MB 00:04:39.738 EAL: No shared files mode enabled, IPC is disabled 00:04:39.738 EAL: No shared files mode enabled, IPC is disabled 00:04:39.738 EAL: No shared files mode enabled, IPC is disabled 00:04:39.738 00:04:39.738 real 0m9.318s 00:04:39.738 user 0m8.345s 00:04:39.738 sys 0m0.811s 00:04:39.738 17:39:06 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.738 ************************************ 00:04:39.738 END TEST env_vtophys 00:04:39.738 ************************************ 00:04:39.738 17:39:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:39.738 17:39:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:39.738 17:39:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.738 17:39:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.738 17:39:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.738 
************************************ 00:04:39.738 START TEST env_pci 00:04:39.738 ************************************ 00:04:39.738 17:39:06 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:39.738 00:04:39.738 00:04:39.738 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.738 http://cunit.sourceforge.net/ 00:04:39.738 00:04:39.738 00:04:39.738 Suite: pci 00:04:39.738 Test: pci_hook ...[2024-11-20 17:39:06.477042] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56970 has claimed it 00:04:39.738 EAL: Cannot find device (10000:00:01.0) 00:04:39.738 passed 00:04:39.738 00:04:39.738 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.738 suites 1 1 n/a 0 0 00:04:39.738 tests 1 1 1 0 0 00:04:39.738 asserts 25 25 25 0 n/a 00:04:39.738 00:04:39.738 Elapsed time = 0.009 seconds 00:04:39.738 EAL: Failed to attach device on primary process 00:04:39.738 00:04:39.738 real 0m0.112s 00:04:39.738 user 0m0.049s 00:04:39.738 sys 0m0.061s 00:04:39.738 17:39:06 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.738 17:39:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:39.738 ************************************ 00:04:39.738 END TEST env_pci 00:04:39.738 ************************************ 00:04:39.738 17:39:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:39.738 17:39:06 env -- env/env.sh@15 -- # uname 00:04:39.738 17:39:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:39.738 17:39:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:39.738 17:39:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:39.738 17:39:06 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:39.738 17:39:06 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.738 17:39:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.738 ************************************ 00:04:39.738 START TEST env_dpdk_post_init 00:04:39.738 ************************************ 00:04:39.738 17:39:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:39.738 EAL: Detected CPU lcores: 10 00:04:39.738 EAL: Detected NUMA nodes: 1 00:04:39.738 EAL: Detected shared linkage of DPDK 00:04:39.738 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.738 EAL: Selected IOVA mode 'PA' 00:04:39.738 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.738 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:39.738 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:39.738 Starting DPDK initialization... 00:04:39.738 Starting SPDK post initialization... 00:04:39.738 SPDK NVMe probe 00:04:39.738 Attaching to 0000:00:10.0 00:04:39.738 Attaching to 0000:00:11.0 00:04:39.738 Attached to 0000:00:10.0 00:04:39.738 Attached to 0000:00:11.0 00:04:39.738 Cleaning up... 
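The env_vtophys run above grows and shrinks the EAL heap in steps of 34MB, 66MB, 130MB, 258MB, 514MB and 1026MB. Each step is a power of two plus 2MB, which is consistent with the test issuing power-of-two allocations while EAL pulls in one extra 2MB hugepage per expansion (note the final "shrunk by 2MB" entry). The sketch below is a hypothetical reconstruction of that arithmetic from the log alone, not SPDK or DPDK code; the allocation sizes and the per-expansion hugepage overhead are inferred assumptions.

```python
# Hypothetical reconstruction of the heap-expansion sizes seen in the
# env_vtophys log. Assumption: the test allocates power-of-two buffers
# and each expansion carries one extra 2 MB hugepage of overhead.
HUGEPAGE_MB = 2

def expansion_mb(alloc_mb: int) -> int:
    """Heap growth (in MB) observed for an allocation of alloc_mb."""
    return alloc_mb + HUGEPAGE_MB

# Sizes taken verbatim from the "Heap on socket 0 was expanded by ..." lines.
observed = [34, 66, 130, 258, 514, 1026]
assumed_allocs = [32, 64, 128, 256, 512, 1024]
assert [expansion_mb(a) for a in assumed_allocs] == observed
```

If the assumption holds, the matching "shrunk by" lines simply release the same regions again once each buffer is freed.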
00:04:39.997 00:04:39.997 real 0m0.305s 00:04:39.997 user 0m0.104s 00:04:39.997 sys 0m0.101s 00:04:39.997 17:39:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.997 ************************************ 00:04:39.997 END TEST env_dpdk_post_init 00:04:39.997 ************************************ 00:04:39.997 17:39:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.997 17:39:06 env -- env/env.sh@26 -- # uname 00:04:39.997 17:39:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:39.997 17:39:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.997 17:39:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.997 17:39:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.997 17:39:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.997 ************************************ 00:04:39.997 START TEST env_mem_callbacks 00:04:39.997 ************************************ 00:04:39.997 17:39:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.997 EAL: Detected CPU lcores: 10 00:04:39.997 EAL: Detected NUMA nodes: 1 00:04:39.997 EAL: Detected shared linkage of DPDK 00:04:39.997 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.997 EAL: Selected IOVA mode 'PA' 00:04:39.997 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.997 00:04:39.997 00:04:39.997 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.997 http://cunit.sourceforge.net/ 00:04:39.997 00:04:39.997 00:04:39.997 Suite: memory 00:04:39.997 Test: test ... 
00:04:39.997 register 0x200000200000 2097152 00:04:39.997 malloc 3145728 00:04:40.256 register 0x200000400000 4194304 00:04:40.256 buf 0x2000004fffc0 len 3145728 PASSED 00:04:40.256 malloc 64 00:04:40.256 buf 0x2000004ffec0 len 64 PASSED 00:04:40.256 malloc 4194304 00:04:40.256 register 0x200000800000 6291456 00:04:40.256 buf 0x2000009fffc0 len 4194304 PASSED 00:04:40.256 free 0x2000004fffc0 3145728 00:04:40.256 free 0x2000004ffec0 64 00:04:40.256 unregister 0x200000400000 4194304 PASSED 00:04:40.256 free 0x2000009fffc0 4194304 00:04:40.256 unregister 0x200000800000 6291456 PASSED 00:04:40.256 malloc 8388608 00:04:40.256 register 0x200000400000 10485760 00:04:40.256 buf 0x2000005fffc0 len 8388608 PASSED 00:04:40.256 free 0x2000005fffc0 8388608 00:04:40.256 unregister 0x200000400000 10485760 PASSED 00:04:40.256 passed 00:04:40.256 00:04:40.256 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.256 suites 1 1 n/a 0 0 00:04:40.256 tests 1 1 1 0 0 00:04:40.256 asserts 15 15 15 0 n/a 00:04:40.256 00:04:40.256 Elapsed time = 0.093 seconds 00:04:40.256 00:04:40.256 real 0m0.302s 00:04:40.256 user 0m0.127s 00:04:40.256 sys 0m0.071s 00:04:40.256 17:39:07 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.256 ************************************ 00:04:40.256 END TEST env_mem_callbacks 00:04:40.256 ************************************ 00:04:40.256 17:39:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:40.256 ************************************ 00:04:40.256 END TEST env 00:04:40.256 ************************************ 00:04:40.256 00:04:40.256 real 0m10.826s 00:04:40.256 user 0m9.103s 00:04:40.256 sys 0m1.369s 00:04:40.256 17:39:07 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.256 17:39:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.256 17:39:07 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:40.256 17:39:07 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.256 17:39:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.256 17:39:07 -- common/autotest_common.sh@10 -- # set +x 00:04:40.256 ************************************ 00:04:40.256 START TEST rpc 00:04:40.256 ************************************ 00:04:40.256 17:39:07 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:40.516 * Looking for test storage... 00:04:40.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.516 17:39:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.516 17:39:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.516 17:39:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.516 17:39:07 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.516 17:39:07 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.516 17:39:07 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.516 17:39:07 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.516 17:39:07 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.516 17:39:07 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.516 17:39:07 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.516 17:39:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.516 17:39:07 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.516 17:39:07 rpc -- scripts/common.sh@345 -- # : 1 00:04:40.516 17:39:07 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.516 17:39:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.516 17:39:07 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.516 17:39:07 rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.516 17:39:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.516 17:39:07 rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.516 17:39:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.516 17:39:07 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.516 17:39:07 rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.516 17:39:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.516 17:39:07 rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.516 17:39:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.516 17:39:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.516 17:39:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.516 17:39:07 rpc -- scripts/common.sh@368 -- # return 0 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.516 --rc genhtml_branch_coverage=1 00:04:40.516 --rc genhtml_function_coverage=1 00:04:40.516 --rc genhtml_legend=1 00:04:40.516 --rc geninfo_all_blocks=1 00:04:40.516 --rc geninfo_unexecuted_blocks=1 00:04:40.516 00:04:40.516 ' 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.516 --rc genhtml_branch_coverage=1 00:04:40.516 --rc genhtml_function_coverage=1 00:04:40.516 --rc genhtml_legend=1 00:04:40.516 --rc geninfo_all_blocks=1 00:04:40.516 --rc geninfo_unexecuted_blocks=1 00:04:40.516 00:04:40.516 ' 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:40.516 --rc genhtml_branch_coverage=1 00:04:40.516 --rc genhtml_function_coverage=1 00:04:40.516 --rc genhtml_legend=1 00:04:40.516 --rc geninfo_all_blocks=1 00:04:40.516 --rc geninfo_unexecuted_blocks=1 00:04:40.516 00:04:40.516 ' 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.516 --rc genhtml_branch_coverage=1 00:04:40.516 --rc genhtml_function_coverage=1 00:04:40.516 --rc genhtml_legend=1 00:04:40.516 --rc geninfo_all_blocks=1 00:04:40.516 --rc geninfo_unexecuted_blocks=1 00:04:40.516 00:04:40.516 ' 00:04:40.516 17:39:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57103 00:04:40.516 17:39:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.516 17:39:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57103 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@835 -- # '[' -z 57103 ']' 00:04:40.516 17:39:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.516 17:39:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.775 [2024-11-20 17:39:07.756881] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:04:40.775 [2024-11-20 17:39:07.757132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57103 ] 00:04:40.775 [2024-11-20 17:39:07.922544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.034 [2024-11-20 17:39:08.079290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:41.034 [2024-11-20 17:39:08.079513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57103' to capture a snapshot of events at runtime. 00:04:41.034 [2024-11-20 17:39:08.079568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:41.034 [2024-11-20 17:39:08.079611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:41.034 [2024-11-20 17:39:08.079623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57103 for offline analysis/debug. 
00:04:41.034 [2024-11-20 17:39:08.081264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.413 17:39:09 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.413 17:39:09 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:42.413 17:39:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.413 17:39:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.413 17:39:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:42.413 17:39:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:42.413 17:39:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.413 17:39:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.413 17:39:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.413 ************************************ 00:04:42.413 START TEST rpc_integrity 00:04:42.413 ************************************ 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:42.413 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.413 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:42.413 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:42.413 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:42.413 17:39:09 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.413 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:42.413 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.413 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.413 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:42.413 { 00:04:42.413 "name": "Malloc0", 00:04:42.413 "aliases": [ 00:04:42.413 "be41ce84-e8de-4cda-998e-5e014112c0ca" 00:04:42.413 ], 00:04:42.414 "product_name": "Malloc disk", 00:04:42.414 "block_size": 512, 00:04:42.414 "num_blocks": 16384, 00:04:42.414 "uuid": "be41ce84-e8de-4cda-998e-5e014112c0ca", 00:04:42.414 "assigned_rate_limits": { 00:04:42.414 "rw_ios_per_sec": 0, 00:04:42.414 "rw_mbytes_per_sec": 0, 00:04:42.414 "r_mbytes_per_sec": 0, 00:04:42.414 "w_mbytes_per_sec": 0 00:04:42.414 }, 00:04:42.414 "claimed": false, 00:04:42.414 "zoned": false, 00:04:42.414 "supported_io_types": { 00:04:42.414 "read": true, 00:04:42.414 "write": true, 00:04:42.414 "unmap": true, 00:04:42.414 "flush": true, 00:04:42.414 "reset": true, 00:04:42.414 "nvme_admin": false, 00:04:42.414 "nvme_io": false, 00:04:42.414 "nvme_io_md": false, 00:04:42.414 "write_zeroes": true, 00:04:42.414 "zcopy": true, 00:04:42.414 "get_zone_info": false, 00:04:42.414 "zone_management": false, 00:04:42.414 "zone_append": false, 00:04:42.414 "compare": false, 00:04:42.414 "compare_and_write": false, 00:04:42.414 "abort": true, 00:04:42.414 "seek_hole": false, 
00:04:42.414 "seek_data": false, 00:04:42.414 "copy": true, 00:04:42.414 "nvme_iov_md": false 00:04:42.414 }, 00:04:42.414 "memory_domains": [ 00:04:42.414 { 00:04:42.414 "dma_device_id": "system", 00:04:42.414 "dma_device_type": 1 00:04:42.414 }, 00:04:42.414 { 00:04:42.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.414 "dma_device_type": 2 00:04:42.414 } 00:04:42.414 ], 00:04:42.414 "driver_specific": {} 00:04:42.414 } 00:04:42.414 ]' 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.414 [2024-11-20 17:39:09.472243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:42.414 [2024-11-20 17:39:09.472371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:42.414 [2024-11-20 17:39:09.472435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:42.414 [2024-11-20 17:39:09.472460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:42.414 [2024-11-20 17:39:09.475807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:42.414 [2024-11-20 17:39:09.475887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:42.414 Passthru0 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:42.414 { 00:04:42.414 "name": "Malloc0", 00:04:42.414 "aliases": [ 00:04:42.414 "be41ce84-e8de-4cda-998e-5e014112c0ca" 00:04:42.414 ], 00:04:42.414 "product_name": "Malloc disk", 00:04:42.414 "block_size": 512, 00:04:42.414 "num_blocks": 16384, 00:04:42.414 "uuid": "be41ce84-e8de-4cda-998e-5e014112c0ca", 00:04:42.414 "assigned_rate_limits": { 00:04:42.414 "rw_ios_per_sec": 0, 00:04:42.414 "rw_mbytes_per_sec": 0, 00:04:42.414 "r_mbytes_per_sec": 0, 00:04:42.414 "w_mbytes_per_sec": 0 00:04:42.414 }, 00:04:42.414 "claimed": true, 00:04:42.414 "claim_type": "exclusive_write", 00:04:42.414 "zoned": false, 00:04:42.414 "supported_io_types": { 00:04:42.414 "read": true, 00:04:42.414 "write": true, 00:04:42.414 "unmap": true, 00:04:42.414 "flush": true, 00:04:42.414 "reset": true, 00:04:42.414 "nvme_admin": false, 00:04:42.414 "nvme_io": false, 00:04:42.414 "nvme_io_md": false, 00:04:42.414 "write_zeroes": true, 00:04:42.414 "zcopy": true, 00:04:42.414 "get_zone_info": false, 00:04:42.414 "zone_management": false, 00:04:42.414 "zone_append": false, 00:04:42.414 "compare": false, 00:04:42.414 "compare_and_write": false, 00:04:42.414 "abort": true, 00:04:42.414 "seek_hole": false, 00:04:42.414 "seek_data": false, 00:04:42.414 "copy": true, 00:04:42.414 "nvme_iov_md": false 00:04:42.414 }, 00:04:42.414 "memory_domains": [ 00:04:42.414 { 00:04:42.414 "dma_device_id": "system", 00:04:42.414 "dma_device_type": 1 00:04:42.414 }, 00:04:42.414 { 00:04:42.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.414 "dma_device_type": 2 00:04:42.414 } 00:04:42.414 ], 00:04:42.414 "driver_specific": {} 00:04:42.414 }, 00:04:42.414 { 00:04:42.414 "name": "Passthru0", 00:04:42.414 "aliases": [ 00:04:42.414 "c8e5e817-1387-5ac6-920d-a2a4e064c73d" 00:04:42.414 ], 00:04:42.414 "product_name": "passthru", 00:04:42.414 
"block_size": 512, 00:04:42.414 "num_blocks": 16384, 00:04:42.414 "uuid": "c8e5e817-1387-5ac6-920d-a2a4e064c73d", 00:04:42.414 "assigned_rate_limits": { 00:04:42.414 "rw_ios_per_sec": 0, 00:04:42.414 "rw_mbytes_per_sec": 0, 00:04:42.414 "r_mbytes_per_sec": 0, 00:04:42.414 "w_mbytes_per_sec": 0 00:04:42.414 }, 00:04:42.414 "claimed": false, 00:04:42.414 "zoned": false, 00:04:42.414 "supported_io_types": { 00:04:42.414 "read": true, 00:04:42.414 "write": true, 00:04:42.414 "unmap": true, 00:04:42.414 "flush": true, 00:04:42.414 "reset": true, 00:04:42.414 "nvme_admin": false, 00:04:42.414 "nvme_io": false, 00:04:42.414 "nvme_io_md": false, 00:04:42.414 "write_zeroes": true, 00:04:42.414 "zcopy": true, 00:04:42.414 "get_zone_info": false, 00:04:42.414 "zone_management": false, 00:04:42.414 "zone_append": false, 00:04:42.414 "compare": false, 00:04:42.414 "compare_and_write": false, 00:04:42.414 "abort": true, 00:04:42.414 "seek_hole": false, 00:04:42.414 "seek_data": false, 00:04:42.414 "copy": true, 00:04:42.414 "nvme_iov_md": false 00:04:42.414 }, 00:04:42.414 "memory_domains": [ 00:04:42.414 { 00:04:42.414 "dma_device_id": "system", 00:04:42.414 "dma_device_type": 1 00:04:42.414 }, 00:04:42.414 { 00:04:42.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.414 "dma_device_type": 2 00:04:42.414 } 00:04:42.414 ], 00:04:42.414 "driver_specific": { 00:04:42.414 "passthru": { 00:04:42.414 "name": "Passthru0", 00:04:42.414 "base_bdev_name": "Malloc0" 00:04:42.414 } 00:04:42.414 } 00:04:42.414 } 00:04:42.414 ]' 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.414 17:39:09 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.414 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.414 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.673 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.673 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:42.673 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.673 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.673 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.673 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:42.673 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:42.673 17:39:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:42.673 00:04:42.673 real 0m0.340s 00:04:42.673 user 0m0.171s 00:04:42.673 sys 0m0.058s 00:04:42.673 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.673 17:39:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.673 ************************************ 00:04:42.673 END TEST rpc_integrity 00:04:42.673 ************************************ 00:04:42.673 17:39:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:42.673 17:39:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.673 17:39:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.673 17:39:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.673 ************************************ 00:04:42.673 START TEST rpc_plugins 00:04:42.673 ************************************ 00:04:42.673 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:42.673 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:42.673 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.673 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.673 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.673 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:42.673 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:42.673 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.673 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.673 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.673 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:42.673 { 00:04:42.673 "name": "Malloc1", 00:04:42.673 "aliases": [ 00:04:42.673 "23c82e27-81ff-4d0e-ba20-29c6b1f171f1" 00:04:42.673 ], 00:04:42.673 "product_name": "Malloc disk", 00:04:42.673 "block_size": 4096, 00:04:42.673 "num_blocks": 256, 00:04:42.673 "uuid": "23c82e27-81ff-4d0e-ba20-29c6b1f171f1", 00:04:42.673 "assigned_rate_limits": { 00:04:42.673 "rw_ios_per_sec": 0, 00:04:42.673 "rw_mbytes_per_sec": 0, 00:04:42.673 "r_mbytes_per_sec": 0, 00:04:42.673 "w_mbytes_per_sec": 0 00:04:42.673 }, 00:04:42.673 "claimed": false, 00:04:42.673 "zoned": false, 00:04:42.673 "supported_io_types": { 00:04:42.673 "read": true, 00:04:42.673 "write": true, 00:04:42.674 "unmap": true, 00:04:42.674 "flush": true, 00:04:42.674 "reset": true, 00:04:42.674 "nvme_admin": false, 00:04:42.674 "nvme_io": false, 00:04:42.674 "nvme_io_md": false, 00:04:42.674 "write_zeroes": true, 00:04:42.674 "zcopy": true, 00:04:42.674 "get_zone_info": false, 00:04:42.674 "zone_management": false, 00:04:42.674 "zone_append": false, 00:04:42.674 "compare": false, 00:04:42.674 "compare_and_write": false, 00:04:42.674 "abort": true, 00:04:42.674 "seek_hole": false, 00:04:42.674 "seek_data": false, 00:04:42.674 "copy": 
true, 00:04:42.674 "nvme_iov_md": false 00:04:42.674 }, 00:04:42.674 "memory_domains": [ 00:04:42.674 { 00:04:42.674 "dma_device_id": "system", 00:04:42.674 "dma_device_type": 1 00:04:42.674 }, 00:04:42.674 { 00:04:42.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.674 "dma_device_type": 2 00:04:42.674 } 00:04:42.674 ], 00:04:42.674 "driver_specific": {} 00:04:42.674 } 00:04:42.674 ]' 00:04:42.674 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:42.674 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:42.674 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:42.674 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.674 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.674 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.674 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:42.674 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.674 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.674 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.674 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:42.674 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:42.934 17:39:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:42.934 00:04:42.934 real 0m0.170s 00:04:42.934 user 0m0.096s 00:04:42.934 sys 0m0.032s 00:04:42.934 ************************************ 00:04:42.934 END TEST rpc_plugins 00:04:42.934 ************************************ 00:04:42.934 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.934 17:39:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.934 17:39:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:42.934 17:39:09 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.934 17:39:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.934 17:39:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.934 ************************************ 00:04:42.934 START TEST rpc_trace_cmd_test 00:04:42.934 ************************************ 00:04:42.934 17:39:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:42.934 17:39:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:42.934 17:39:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:42.934 17:39:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.934 17:39:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:42.934 17:39:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.934 17:39:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:42.934 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57103", 00:04:42.934 "tpoint_group_mask": "0x8", 00:04:42.934 "iscsi_conn": { 00:04:42.934 "mask": "0x2", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "scsi": { 00:04:42.934 "mask": "0x4", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "bdev": { 00:04:42.934 "mask": "0x8", 00:04:42.934 "tpoint_mask": "0xffffffffffffffff" 00:04:42.934 }, 00:04:42.934 "nvmf_rdma": { 00:04:42.934 "mask": "0x10", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "nvmf_tcp": { 00:04:42.934 "mask": "0x20", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "ftl": { 00:04:42.934 "mask": "0x40", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "blobfs": { 00:04:42.934 "mask": "0x80", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "dsa": { 00:04:42.934 "mask": "0x200", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "thread": { 00:04:42.934 "mask": "0x400", 00:04:42.934 
"tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "nvme_pcie": { 00:04:42.934 "mask": "0x800", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "iaa": { 00:04:42.934 "mask": "0x1000", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "nvme_tcp": { 00:04:42.934 "mask": "0x2000", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "bdev_nvme": { 00:04:42.934 "mask": "0x4000", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "sock": { 00:04:42.934 "mask": "0x8000", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "blob": { 00:04:42.934 "mask": "0x10000", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "bdev_raid": { 00:04:42.934 "mask": "0x20000", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 }, 00:04:42.934 "scheduler": { 00:04:42.934 "mask": "0x40000", 00:04:42.934 "tpoint_mask": "0x0" 00:04:42.934 } 00:04:42.934 }' 00:04:42.934 17:39:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:42.934 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:42.934 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:42.934 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:42.934 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.195 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.195 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.195 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.195 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.195 ************************************ 00:04:43.195 END TEST rpc_trace_cmd_test 00:04:43.195 ************************************ 00:04:43.195 17:39:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:43.195 00:04:43.195 real 0m0.255s 00:04:43.195 user 
0m0.210s 00:04:43.195 sys 0m0.036s 00:04:43.195 17:39:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.195 17:39:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.195 17:39:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:43.195 17:39:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:43.195 17:39:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:43.195 17:39:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.195 17:39:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.195 17:39:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.195 ************************************ 00:04:43.195 START TEST rpc_daemon_integrity 00:04:43.195 ************************************ 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.195 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.454 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.454 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.454 { 00:04:43.454 "name": "Malloc2", 00:04:43.454 "aliases": [ 00:04:43.454 "fe341970-c359-438d-8abf-80803fa45ef7" 00:04:43.454 ], 00:04:43.454 "product_name": "Malloc disk", 00:04:43.454 "block_size": 512, 00:04:43.454 "num_blocks": 16384, 00:04:43.454 "uuid": "fe341970-c359-438d-8abf-80803fa45ef7", 00:04:43.454 "assigned_rate_limits": { 00:04:43.454 "rw_ios_per_sec": 0, 00:04:43.454 "rw_mbytes_per_sec": 0, 00:04:43.454 "r_mbytes_per_sec": 0, 00:04:43.454 "w_mbytes_per_sec": 0 00:04:43.454 }, 00:04:43.454 "claimed": false, 00:04:43.454 "zoned": false, 00:04:43.454 "supported_io_types": { 00:04:43.454 "read": true, 00:04:43.454 "write": true, 00:04:43.454 "unmap": true, 00:04:43.454 "flush": true, 00:04:43.454 "reset": true, 00:04:43.454 "nvme_admin": false, 00:04:43.454 "nvme_io": false, 00:04:43.454 "nvme_io_md": false, 00:04:43.454 "write_zeroes": true, 00:04:43.454 "zcopy": true, 00:04:43.454 "get_zone_info": false, 00:04:43.454 "zone_management": false, 00:04:43.454 "zone_append": false, 00:04:43.454 "compare": false, 00:04:43.454 "compare_and_write": false, 00:04:43.454 "abort": true, 00:04:43.454 "seek_hole": false, 00:04:43.454 "seek_data": false, 00:04:43.454 "copy": true, 00:04:43.454 "nvme_iov_md": false 00:04:43.454 }, 00:04:43.454 "memory_domains": [ 00:04:43.454 { 00:04:43.454 "dma_device_id": "system", 00:04:43.454 "dma_device_type": 1 00:04:43.454 }, 00:04:43.454 { 00:04:43.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.454 "dma_device_type": 2 00:04:43.455 } 
00:04:43.455 ], 00:04:43.455 "driver_specific": {} 00:04:43.455 } 00:04:43.455 ]' 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.455 [2024-11-20 17:39:10.433942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:43.455 [2024-11-20 17:39:10.434197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.455 [2024-11-20 17:39:10.434237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:43.455 [2024-11-20 17:39:10.434253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.455 [2024-11-20 17:39:10.437576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.455 Passthru0 00:04:43.455 [2024-11-20 17:39:10.437733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.455 { 00:04:43.455 "name": "Malloc2", 00:04:43.455 "aliases": [ 00:04:43.455 "fe341970-c359-438d-8abf-80803fa45ef7" 
00:04:43.455 ], 00:04:43.455 "product_name": "Malloc disk", 00:04:43.455 "block_size": 512, 00:04:43.455 "num_blocks": 16384, 00:04:43.455 "uuid": "fe341970-c359-438d-8abf-80803fa45ef7", 00:04:43.455 "assigned_rate_limits": { 00:04:43.455 "rw_ios_per_sec": 0, 00:04:43.455 "rw_mbytes_per_sec": 0, 00:04:43.455 "r_mbytes_per_sec": 0, 00:04:43.455 "w_mbytes_per_sec": 0 00:04:43.455 }, 00:04:43.455 "claimed": true, 00:04:43.455 "claim_type": "exclusive_write", 00:04:43.455 "zoned": false, 00:04:43.455 "supported_io_types": { 00:04:43.455 "read": true, 00:04:43.455 "write": true, 00:04:43.455 "unmap": true, 00:04:43.455 "flush": true, 00:04:43.455 "reset": true, 00:04:43.455 "nvme_admin": false, 00:04:43.455 "nvme_io": false, 00:04:43.455 "nvme_io_md": false, 00:04:43.455 "write_zeroes": true, 00:04:43.455 "zcopy": true, 00:04:43.455 "get_zone_info": false, 00:04:43.455 "zone_management": false, 00:04:43.455 "zone_append": false, 00:04:43.455 "compare": false, 00:04:43.455 "compare_and_write": false, 00:04:43.455 "abort": true, 00:04:43.455 "seek_hole": false, 00:04:43.455 "seek_data": false, 00:04:43.455 "copy": true, 00:04:43.455 "nvme_iov_md": false 00:04:43.455 }, 00:04:43.455 "memory_domains": [ 00:04:43.455 { 00:04:43.455 "dma_device_id": "system", 00:04:43.455 "dma_device_type": 1 00:04:43.455 }, 00:04:43.455 { 00:04:43.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.455 "dma_device_type": 2 00:04:43.455 } 00:04:43.455 ], 00:04:43.455 "driver_specific": {} 00:04:43.455 }, 00:04:43.455 { 00:04:43.455 "name": "Passthru0", 00:04:43.455 "aliases": [ 00:04:43.455 "20782f6e-0797-5c74-9a53-574e0a187df4" 00:04:43.455 ], 00:04:43.455 "product_name": "passthru", 00:04:43.455 "block_size": 512, 00:04:43.455 "num_blocks": 16384, 00:04:43.455 "uuid": "20782f6e-0797-5c74-9a53-574e0a187df4", 00:04:43.455 "assigned_rate_limits": { 00:04:43.455 "rw_ios_per_sec": 0, 00:04:43.455 "rw_mbytes_per_sec": 0, 00:04:43.455 "r_mbytes_per_sec": 0, 00:04:43.455 "w_mbytes_per_sec": 0 
00:04:43.455 }, 00:04:43.455 "claimed": false, 00:04:43.455 "zoned": false, 00:04:43.455 "supported_io_types": { 00:04:43.455 "read": true, 00:04:43.455 "write": true, 00:04:43.455 "unmap": true, 00:04:43.455 "flush": true, 00:04:43.455 "reset": true, 00:04:43.455 "nvme_admin": false, 00:04:43.455 "nvme_io": false, 00:04:43.455 "nvme_io_md": false, 00:04:43.455 "write_zeroes": true, 00:04:43.455 "zcopy": true, 00:04:43.455 "get_zone_info": false, 00:04:43.455 "zone_management": false, 00:04:43.455 "zone_append": false, 00:04:43.455 "compare": false, 00:04:43.455 "compare_and_write": false, 00:04:43.455 "abort": true, 00:04:43.455 "seek_hole": false, 00:04:43.455 "seek_data": false, 00:04:43.455 "copy": true, 00:04:43.455 "nvme_iov_md": false 00:04:43.455 }, 00:04:43.455 "memory_domains": [ 00:04:43.455 { 00:04:43.455 "dma_device_id": "system", 00:04:43.455 "dma_device_type": 1 00:04:43.455 }, 00:04:43.455 { 00:04:43.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.455 "dma_device_type": 2 00:04:43.455 } 00:04:43.455 ], 00:04:43.455 "driver_specific": { 00:04:43.455 "passthru": { 00:04:43.455 "name": "Passthru0", 00:04:43.455 "base_bdev_name": "Malloc2" 00:04:43.455 } 00:04:43.455 } 00:04:43.455 } 00:04:43.455 ]' 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.455 00:04:43.455 real 0m0.343s 00:04:43.455 user 0m0.189s 00:04:43.455 sys 0m0.053s 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.455 17:39:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.455 ************************************ 00:04:43.455 END TEST rpc_daemon_integrity 00:04:43.455 ************************************ 00:04:43.715 17:39:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:43.715 17:39:10 rpc -- rpc/rpc.sh@84 -- # killprocess 57103 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@954 -- # '[' -z 57103 ']' 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@958 -- # kill -0 57103 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57103 00:04:43.715 killing process with pid 57103 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57103' 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@973 -- # kill 57103 00:04:43.715 17:39:10 rpc -- common/autotest_common.sh@978 -- # wait 57103 00:04:47.007 00:04:47.007 real 0m6.497s 00:04:47.007 user 0m6.871s 00:04:47.007 sys 0m1.161s 00:04:47.007 17:39:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.007 17:39:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.007 ************************************ 00:04:47.007 END TEST rpc 00:04:47.007 ************************************ 00:04:47.007 17:39:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.007 17:39:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.007 17:39:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.007 17:39:13 -- common/autotest_common.sh@10 -- # set +x 00:04:47.007 ************************************ 00:04:47.007 START TEST skip_rpc 00:04:47.007 ************************************ 00:04:47.007 17:39:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.007 * Looking for test storage... 
00:04:47.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.007 17:39:14 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.007 17:39:14 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.007 17:39:14 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.007 17:39:14 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.007 17:39:14 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.266 17:39:14 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:47.266 17:39:14 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.266 17:39:14 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.266 --rc genhtml_branch_coverage=1 00:04:47.266 --rc genhtml_function_coverage=1 00:04:47.266 --rc genhtml_legend=1 00:04:47.266 --rc geninfo_all_blocks=1 00:04:47.266 --rc geninfo_unexecuted_blocks=1 00:04:47.266 00:04:47.266 ' 00:04:47.266 17:39:14 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.266 --rc genhtml_branch_coverage=1 00:04:47.266 --rc genhtml_function_coverage=1 00:04:47.266 --rc genhtml_legend=1 00:04:47.266 --rc geninfo_all_blocks=1 00:04:47.266 --rc geninfo_unexecuted_blocks=1 00:04:47.266 00:04:47.266 ' 00:04:47.266 17:39:14 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.266 --rc genhtml_branch_coverage=1 00:04:47.266 --rc genhtml_function_coverage=1 00:04:47.266 --rc genhtml_legend=1 00:04:47.266 --rc geninfo_all_blocks=1 00:04:47.266 --rc geninfo_unexecuted_blocks=1 00:04:47.266 00:04:47.266 ' 00:04:47.266 17:39:14 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.266 --rc genhtml_branch_coverage=1 00:04:47.266 --rc genhtml_function_coverage=1 00:04:47.266 --rc genhtml_legend=1 00:04:47.266 --rc geninfo_all_blocks=1 00:04:47.266 --rc geninfo_unexecuted_blocks=1 00:04:47.266 00:04:47.266 ' 00:04:47.266 17:39:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.266 17:39:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:47.266 17:39:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:47.266 17:39:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.266 17:39:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.266 17:39:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.266 ************************************ 00:04:47.266 START TEST skip_rpc 00:04:47.266 ************************************ 00:04:47.266 17:39:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:47.266 17:39:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57343 00:04:47.267 17:39:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.267 17:39:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:47.267 17:39:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:47.267 [2024-11-20 17:39:14.316860] Starting SPDK v25.01-pre 
git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:04:47.267 [2024-11-20 17:39:14.317119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57343 ] 00:04:47.528 [2024-11-20 17:39:14.500357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.528 [2024-11-20 17:39:14.672646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.806 17:39:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.806 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:52.806 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57343 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57343 ']' 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57343 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57343 00:04:52.807 killing process with pid 57343 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57343' 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57343 00:04:52.807 17:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57343 00:04:55.424 00:04:55.424 real 0m8.203s 00:04:55.424 user 0m7.548s 00:04:55.424 sys 0m0.559s 00:04:55.424 17:39:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.424 ************************************ 00:04:55.424 END TEST skip_rpc 00:04:55.424 ************************************ 00:04:55.424 17:39:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.424 17:39:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:55.424 17:39:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.424 17:39:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.424 17:39:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.424 
************************************ 00:04:55.424 START TEST skip_rpc_with_json 00:04:55.424 ************************************ 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57458 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57458 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57458 ']' 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.424 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.424 [2024-11-20 17:39:22.572467] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:04:55.424 [2024-11-20 17:39:22.572692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57458 ] 00:04:55.683 [2024-11-20 17:39:22.747189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.950 [2024-11-20 17:39:22.910626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.327 [2024-11-20 17:39:24.108572] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:57.327 request: 00:04:57.327 { 00:04:57.327 "trtype": "tcp", 00:04:57.327 "method": "nvmf_get_transports", 00:04:57.327 "req_id": 1 00:04:57.327 } 00:04:57.327 Got JSON-RPC error response 00:04:57.327 response: 00:04:57.327 { 00:04:57.327 "code": -19, 00:04:57.327 "message": "No such device" 00:04:57.327 } 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.327 [2024-11-20 17:39:24.124763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.327 17:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.327 { 00:04:57.327 "subsystems": [ 00:04:57.327 { 00:04:57.327 "subsystem": "fsdev", 00:04:57.327 "config": [ 00:04:57.327 { 00:04:57.327 "method": "fsdev_set_opts", 00:04:57.327 "params": { 00:04:57.327 "fsdev_io_pool_size": 65535, 00:04:57.327 "fsdev_io_cache_size": 256 00:04:57.327 } 00:04:57.327 } 00:04:57.327 ] 00:04:57.327 }, 00:04:57.327 { 00:04:57.327 "subsystem": "keyring", 00:04:57.327 "config": [] 00:04:57.327 }, 00:04:57.327 { 00:04:57.327 "subsystem": "iobuf", 00:04:57.327 "config": [ 00:04:57.327 { 00:04:57.327 "method": "iobuf_set_options", 00:04:57.327 "params": { 00:04:57.327 "small_pool_count": 8192, 00:04:57.327 "large_pool_count": 1024, 00:04:57.327 "small_bufsize": 8192, 00:04:57.327 "large_bufsize": 135168, 00:04:57.327 "enable_numa": false 00:04:57.327 } 00:04:57.327 } 00:04:57.327 ] 00:04:57.327 }, 00:04:57.327 { 00:04:57.327 "subsystem": "sock", 00:04:57.327 "config": [ 00:04:57.327 { 00:04:57.327 "method": "sock_set_default_impl", 00:04:57.327 "params": { 00:04:57.327 "impl_name": "posix" 00:04:57.327 } 00:04:57.327 }, 00:04:57.327 { 00:04:57.327 "method": "sock_impl_set_options", 00:04:57.327 "params": { 00:04:57.327 "impl_name": "ssl", 00:04:57.327 "recv_buf_size": 4096, 00:04:57.327 "send_buf_size": 4096, 00:04:57.327 "enable_recv_pipe": true, 00:04:57.327 "enable_quickack": false, 00:04:57.327 
"enable_placement_id": 0, 00:04:57.327 "enable_zerocopy_send_server": true, 00:04:57.327 "enable_zerocopy_send_client": false, 00:04:57.327 "zerocopy_threshold": 0, 00:04:57.327 "tls_version": 0, 00:04:57.327 "enable_ktls": false 00:04:57.327 } 00:04:57.327 }, 00:04:57.327 { 00:04:57.327 "method": "sock_impl_set_options", 00:04:57.327 "params": { 00:04:57.327 "impl_name": "posix", 00:04:57.327 "recv_buf_size": 2097152, 00:04:57.327 "send_buf_size": 2097152, 00:04:57.327 "enable_recv_pipe": true, 00:04:57.327 "enable_quickack": false, 00:04:57.327 "enable_placement_id": 0, 00:04:57.327 "enable_zerocopy_send_server": true, 00:04:57.327 "enable_zerocopy_send_client": false, 00:04:57.327 "zerocopy_threshold": 0, 00:04:57.327 "tls_version": 0, 00:04:57.327 "enable_ktls": false 00:04:57.327 } 00:04:57.327 } 00:04:57.327 ] 00:04:57.327 }, 00:04:57.327 { 00:04:57.327 "subsystem": "vmd", 00:04:57.327 "config": [] 00:04:57.327 }, 00:04:57.327 { 00:04:57.327 "subsystem": "accel", 00:04:57.327 "config": [ 00:04:57.327 { 00:04:57.328 "method": "accel_set_options", 00:04:57.328 "params": { 00:04:57.328 "small_cache_size": 128, 00:04:57.328 "large_cache_size": 16, 00:04:57.328 "task_count": 2048, 00:04:57.328 "sequence_count": 2048, 00:04:57.328 "buf_count": 2048 00:04:57.328 } 00:04:57.328 } 00:04:57.328 ] 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "subsystem": "bdev", 00:04:57.328 "config": [ 00:04:57.328 { 00:04:57.328 "method": "bdev_set_options", 00:04:57.328 "params": { 00:04:57.328 "bdev_io_pool_size": 65535, 00:04:57.328 "bdev_io_cache_size": 256, 00:04:57.328 "bdev_auto_examine": true, 00:04:57.328 "iobuf_small_cache_size": 128, 00:04:57.328 "iobuf_large_cache_size": 16 00:04:57.328 } 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "method": "bdev_raid_set_options", 00:04:57.328 "params": { 00:04:57.328 "process_window_size_kb": 1024, 00:04:57.328 "process_max_bandwidth_mb_sec": 0 00:04:57.328 } 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "method": "bdev_iscsi_set_options", 
00:04:57.328 "params": { 00:04:57.328 "timeout_sec": 30 00:04:57.328 } 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "method": "bdev_nvme_set_options", 00:04:57.328 "params": { 00:04:57.328 "action_on_timeout": "none", 00:04:57.328 "timeout_us": 0, 00:04:57.328 "timeout_admin_us": 0, 00:04:57.328 "keep_alive_timeout_ms": 10000, 00:04:57.328 "arbitration_burst": 0, 00:04:57.328 "low_priority_weight": 0, 00:04:57.328 "medium_priority_weight": 0, 00:04:57.328 "high_priority_weight": 0, 00:04:57.328 "nvme_adminq_poll_period_us": 10000, 00:04:57.328 "nvme_ioq_poll_period_us": 0, 00:04:57.328 "io_queue_requests": 0, 00:04:57.328 "delay_cmd_submit": true, 00:04:57.328 "transport_retry_count": 4, 00:04:57.328 "bdev_retry_count": 3, 00:04:57.328 "transport_ack_timeout": 0, 00:04:57.328 "ctrlr_loss_timeout_sec": 0, 00:04:57.328 "reconnect_delay_sec": 0, 00:04:57.328 "fast_io_fail_timeout_sec": 0, 00:04:57.328 "disable_auto_failback": false, 00:04:57.328 "generate_uuids": false, 00:04:57.328 "transport_tos": 0, 00:04:57.328 "nvme_error_stat": false, 00:04:57.328 "rdma_srq_size": 0, 00:04:57.328 "io_path_stat": false, 00:04:57.328 "allow_accel_sequence": false, 00:04:57.328 "rdma_max_cq_size": 0, 00:04:57.328 "rdma_cm_event_timeout_ms": 0, 00:04:57.328 "dhchap_digests": [ 00:04:57.328 "sha256", 00:04:57.328 "sha384", 00:04:57.328 "sha512" 00:04:57.328 ], 00:04:57.328 "dhchap_dhgroups": [ 00:04:57.328 "null", 00:04:57.328 "ffdhe2048", 00:04:57.328 "ffdhe3072", 00:04:57.328 "ffdhe4096", 00:04:57.328 "ffdhe6144", 00:04:57.328 "ffdhe8192" 00:04:57.328 ] 00:04:57.328 } 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "method": "bdev_nvme_set_hotplug", 00:04:57.328 "params": { 00:04:57.328 "period_us": 100000, 00:04:57.328 "enable": false 00:04:57.328 } 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "method": "bdev_wait_for_examine" 00:04:57.328 } 00:04:57.328 ] 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "subsystem": "scsi", 00:04:57.328 "config": null 00:04:57.328 }, 00:04:57.328 { 
00:04:57.328 "subsystem": "scheduler", 00:04:57.328 "config": [ 00:04:57.328 { 00:04:57.328 "method": "framework_set_scheduler", 00:04:57.328 "params": { 00:04:57.328 "name": "static" 00:04:57.328 } 00:04:57.328 } 00:04:57.328 ] 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "subsystem": "vhost_scsi", 00:04:57.328 "config": [] 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "subsystem": "vhost_blk", 00:04:57.328 "config": [] 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "subsystem": "ublk", 00:04:57.328 "config": [] 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "subsystem": "nbd", 00:04:57.328 "config": [] 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "subsystem": "nvmf", 00:04:57.328 "config": [ 00:04:57.328 { 00:04:57.328 "method": "nvmf_set_config", 00:04:57.328 "params": { 00:04:57.328 "discovery_filter": "match_any", 00:04:57.328 "admin_cmd_passthru": { 00:04:57.328 "identify_ctrlr": false 00:04:57.328 }, 00:04:57.328 "dhchap_digests": [ 00:04:57.328 "sha256", 00:04:57.328 "sha384", 00:04:57.328 "sha512" 00:04:57.328 ], 00:04:57.328 "dhchap_dhgroups": [ 00:04:57.328 "null", 00:04:57.328 "ffdhe2048", 00:04:57.328 "ffdhe3072", 00:04:57.328 "ffdhe4096", 00:04:57.328 "ffdhe6144", 00:04:57.328 "ffdhe8192" 00:04:57.328 ] 00:04:57.328 } 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "method": "nvmf_set_max_subsystems", 00:04:57.328 "params": { 00:04:57.328 "max_subsystems": 1024 00:04:57.328 } 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "method": "nvmf_set_crdt", 00:04:57.328 "params": { 00:04:57.328 "crdt1": 0, 00:04:57.328 "crdt2": 0, 00:04:57.328 "crdt3": 0 00:04:57.328 } 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "method": "nvmf_create_transport", 00:04:57.328 "params": { 00:04:57.328 "trtype": "TCP", 00:04:57.328 "max_queue_depth": 128, 00:04:57.328 "max_io_qpairs_per_ctrlr": 127, 00:04:57.328 "in_capsule_data_size": 4096, 00:04:57.328 "max_io_size": 131072, 00:04:57.328 "io_unit_size": 131072, 00:04:57.328 "max_aq_depth": 128, 00:04:57.328 "num_shared_buffers": 511, 
00:04:57.328 "buf_cache_size": 4294967295, 00:04:57.328 "dif_insert_or_strip": false, 00:04:57.328 "zcopy": false, 00:04:57.328 "c2h_success": true, 00:04:57.328 "sock_priority": 0, 00:04:57.328 "abort_timeout_sec": 1, 00:04:57.328 "ack_timeout": 0, 00:04:57.328 "data_wr_pool_size": 0 00:04:57.328 } 00:04:57.328 } 00:04:57.328 ] 00:04:57.328 }, 00:04:57.328 { 00:04:57.328 "subsystem": "iscsi", 00:04:57.328 "config": [ 00:04:57.328 { 00:04:57.328 "method": "iscsi_set_options", 00:04:57.328 "params": { 00:04:57.328 "node_base": "iqn.2016-06.io.spdk", 00:04:57.328 "max_sessions": 128, 00:04:57.328 "max_connections_per_session": 2, 00:04:57.328 "max_queue_depth": 64, 00:04:57.328 "default_time2wait": 2, 00:04:57.328 "default_time2retain": 20, 00:04:57.328 "first_burst_length": 8192, 00:04:57.328 "immediate_data": true, 00:04:57.328 "allow_duplicated_isid": false, 00:04:57.328 "error_recovery_level": 0, 00:04:57.328 "nop_timeout": 60, 00:04:57.328 "nop_in_interval": 30, 00:04:57.328 "disable_chap": false, 00:04:57.328 "require_chap": false, 00:04:57.328 "mutual_chap": false, 00:04:57.328 "chap_group": 0, 00:04:57.328 "max_large_datain_per_connection": 64, 00:04:57.328 "max_r2t_per_connection": 4, 00:04:57.328 "pdu_pool_size": 36864, 00:04:57.328 "immediate_data_pool_size": 16384, 00:04:57.328 "data_out_pool_size": 2048 00:04:57.328 } 00:04:57.328 } 00:04:57.328 ] 00:04:57.328 } 00:04:57.328 ] 00:04:57.328 } 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57458 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57458 ']' 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57458 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57458 00:04:57.328 killing process with pid 57458 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57458' 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57458 00:04:57.328 17:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57458 00:05:00.621 17:39:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.621 17:39:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57525 00:05:00.621 17:39:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57525 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57525 ']' 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57525 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57525 00:05:05.896 killing process with pid 57525 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57525' 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57525 00:05:05.896 17:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57525 00:05:08.489 17:39:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:08.489 17:39:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:08.489 ************************************ 00:05:08.489 END TEST skip_rpc_with_json 00:05:08.489 ************************************ 00:05:08.489 00:05:08.489 real 0m13.170s 00:05:08.489 user 0m12.292s 00:05:08.489 sys 0m1.219s 00:05:08.489 17:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.489 17:39:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.749 17:39:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:08.749 17:39:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.749 17:39:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.749 17:39:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.749 ************************************ 00:05:08.749 START TEST skip_rpc_with_delay 00:05:08.749 ************************************ 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:08.749 
17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.749 [2024-11-20 17:39:35.808162] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.749 00:05:08.749 real 0m0.176s 00:05:08.749 user 0m0.095s 00:05:08.749 sys 0m0.080s 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.749 ************************************ 00:05:08.749 END TEST skip_rpc_with_delay 00:05:08.749 ************************************ 00:05:08.749 17:39:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:09.008 17:39:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:09.008 17:39:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:09.008 17:39:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:09.008 17:39:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.008 17:39:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.008 17:39:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.008 ************************************ 00:05:09.008 START TEST exit_on_failed_rpc_init 00:05:09.008 ************************************ 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57664 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57664 00:05:09.008 17:39:35 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57664 ']' 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.008 17:39:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.008 [2024-11-20 17:39:36.051373] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:09.008 [2024-11-20 17:39:36.051500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57664 ] 00:05:09.268 [2024-11-20 17:39:36.207832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.268 [2024-11-20 17:39:36.365159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.647 17:39:37 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:10.647 17:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.647 [2024-11-20 17:39:37.580303] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:05:10.647 [2024-11-20 17:39:37.580514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57688 ] 00:05:10.647 [2024-11-20 17:39:37.742270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.914 [2024-11-20 17:39:37.905701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.914 [2024-11-20 17:39:37.905829] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:10.914 [2024-11-20 17:39:37.905844] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:10.914 [2024-11-20 17:39:37.905859] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57664 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57664 ']' 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57664 00:05:11.184 17:39:38 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57664 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.184 killing process with pid 57664 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57664' 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57664 00:05:11.184 17:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57664 00:05:14.476 ************************************ 00:05:14.476 END TEST exit_on_failed_rpc_init 00:05:14.476 ************************************ 00:05:14.476 00:05:14.476 real 0m5.482s 00:05:14.476 user 0m5.768s 00:05:14.476 sys 0m0.732s 00:05:14.476 17:39:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.476 17:39:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.476 17:39:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.476 ************************************ 00:05:14.476 END TEST skip_rpc 00:05:14.476 ************************************ 00:05:14.476 00:05:14.476 real 0m27.519s 00:05:14.476 user 0m25.907s 00:05:14.476 sys 0m2.893s 00:05:14.476 17:39:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.476 17:39:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.476 17:39:41 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.476 17:39:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.476 17:39:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.476 17:39:41 -- common/autotest_common.sh@10 -- # set +x 00:05:14.476 ************************************ 00:05:14.476 START TEST rpc_client 00:05:14.476 ************************************ 00:05:14.476 17:39:41 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.476 * Looking for test storage... 00:05:14.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.736 17:39:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.736 --rc genhtml_branch_coverage=1 00:05:14.736 --rc genhtml_function_coverage=1 00:05:14.736 --rc genhtml_legend=1 00:05:14.736 --rc geninfo_all_blocks=1 00:05:14.736 --rc geninfo_unexecuted_blocks=1 00:05:14.736 00:05:14.736 ' 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.736 --rc genhtml_branch_coverage=1 00:05:14.736 --rc genhtml_function_coverage=1 00:05:14.736 --rc 
genhtml_legend=1 00:05:14.736 --rc geninfo_all_blocks=1 00:05:14.736 --rc geninfo_unexecuted_blocks=1 00:05:14.736 00:05:14.736 ' 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.736 --rc genhtml_branch_coverage=1 00:05:14.736 --rc genhtml_function_coverage=1 00:05:14.736 --rc genhtml_legend=1 00:05:14.736 --rc geninfo_all_blocks=1 00:05:14.736 --rc geninfo_unexecuted_blocks=1 00:05:14.736 00:05:14.736 ' 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.736 --rc genhtml_branch_coverage=1 00:05:14.736 --rc genhtml_function_coverage=1 00:05:14.736 --rc genhtml_legend=1 00:05:14.736 --rc geninfo_all_blocks=1 00:05:14.736 --rc geninfo_unexecuted_blocks=1 00:05:14.736 00:05:14.736 ' 00:05:14.736 17:39:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:14.736 OK 00:05:14.736 17:39:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.736 ************************************ 00:05:14.736 END TEST rpc_client 00:05:14.736 ************************************ 00:05:14.736 00:05:14.736 real 0m0.314s 00:05:14.736 user 0m0.181s 00:05:14.736 sys 0m0.144s 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.736 17:39:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:14.736 17:39:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.736 17:39:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.736 17:39:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.736 17:39:41 -- common/autotest_common.sh@10 -- # set +x 00:05:14.736 ************************************ 00:05:14.736 START TEST json_config 
00:05:14.736 ************************************ 00:05:14.736 17:39:41 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.997 17:39:41 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.997 17:39:41 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.997 17:39:41 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.997 17:39:42 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.997 17:39:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.997 17:39:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.997 17:39:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.997 17:39:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.997 17:39:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.997 17:39:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.997 17:39:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.997 17:39:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.997 17:39:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.997 17:39:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.997 17:39:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.997 17:39:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:14.997 17:39:42 json_config -- scripts/common.sh@345 -- # : 1 00:05:14.997 17:39:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.997 17:39:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.997 17:39:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:14.997 17:39:42 json_config -- scripts/common.sh@353 -- # local d=1 00:05:14.997 17:39:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.997 17:39:42 json_config -- scripts/common.sh@355 -- # echo 1 00:05:14.997 17:39:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.997 17:39:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:14.997 17:39:42 json_config -- scripts/common.sh@353 -- # local d=2 00:05:14.997 17:39:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.997 17:39:42 json_config -- scripts/common.sh@355 -- # echo 2 00:05:14.997 17:39:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.997 17:39:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.997 17:39:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.997 17:39:42 json_config -- scripts/common.sh@368 -- # return 0 00:05:14.997 17:39:42 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.997 17:39:42 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.997 --rc genhtml_branch_coverage=1 00:05:14.997 --rc genhtml_function_coverage=1 00:05:14.997 --rc genhtml_legend=1 00:05:14.997 --rc geninfo_all_blocks=1 00:05:14.997 --rc geninfo_unexecuted_blocks=1 00:05:14.997 00:05:14.997 ' 00:05:14.997 17:39:42 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.997 --rc genhtml_branch_coverage=1 00:05:14.997 --rc genhtml_function_coverage=1 00:05:14.997 --rc genhtml_legend=1 00:05:14.997 --rc geninfo_all_blocks=1 00:05:14.997 --rc geninfo_unexecuted_blocks=1 00:05:14.997 00:05:14.997 ' 00:05:14.997 17:39:42 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.997 --rc genhtml_branch_coverage=1 00:05:14.997 --rc genhtml_function_coverage=1 00:05:14.997 --rc genhtml_legend=1 00:05:14.997 --rc geninfo_all_blocks=1 00:05:14.997 --rc geninfo_unexecuted_blocks=1 00:05:14.997 00:05:14.997 ' 00:05:14.997 17:39:42 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.997 --rc genhtml_branch_coverage=1 00:05:14.997 --rc genhtml_function_coverage=1 00:05:14.997 --rc genhtml_legend=1 00:05:14.997 --rc geninfo_all_blocks=1 00:05:14.997 --rc geninfo_unexecuted_blocks=1 00:05:14.997 00:05:14.997 ' 00:05:14.997 17:39:42 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6aee6518-a1c0-4f87-b451-1b81ad9dbce6 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=6aee6518-a1c0-4f87-b451-1b81ad9dbce6 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.997 17:39:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.997 17:39:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.997 17:39:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.997 17:39:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.997 17:39:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.997 17:39:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.997 17:39:42 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.997 17:39:42 json_config -- paths/export.sh@5 -- # export PATH 00:05:14.997 17:39:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.997 17:39:42 json_config -- nvmf/common.sh@51 -- # : 0 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.998 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.998 17:39:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.998 17:39:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:14.998 17:39:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:14.998 17:39:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.998 17:39:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.998 17:39:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.998 17:39:42 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:14.998 WARNING: No tests are enabled so not running JSON configuration tests 00:05:14.998 17:39:42 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:14.998 00:05:14.998 real 0m0.244s 00:05:14.998 user 0m0.157s 00:05:14.998 sys 0m0.092s 00:05:14.998 17:39:42 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.998 17:39:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.998 ************************************ 00:05:14.998 END TEST json_config 00:05:14.998 ************************************ 00:05:15.324 17:39:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:15.324 17:39:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.324 17:39:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.324 17:39:42 -- common/autotest_common.sh@10 -- # set +x 00:05:15.324 ************************************ 00:05:15.324 START TEST json_config_extra_key 00:05:15.324 ************************************ 00:05:15.324 17:39:42 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:15.324 17:39:42 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.324 17:39:42 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:15.324 17:39:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.324 17:39:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.324 17:39:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.324 17:39:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.324 17:39:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:15.325 17:39:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.325 17:39:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.325 --rc genhtml_branch_coverage=1 00:05:15.325 --rc genhtml_function_coverage=1 00:05:15.325 --rc genhtml_legend=1 00:05:15.325 --rc geninfo_all_blocks=1 00:05:15.325 --rc geninfo_unexecuted_blocks=1 00:05:15.325 00:05:15.325 ' 00:05:15.325 17:39:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.325 --rc genhtml_branch_coverage=1 00:05:15.325 --rc genhtml_function_coverage=1 00:05:15.325 --rc 
genhtml_legend=1 00:05:15.325 --rc geninfo_all_blocks=1 00:05:15.325 --rc geninfo_unexecuted_blocks=1 00:05:15.325 00:05:15.325 ' 00:05:15.325 17:39:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.325 --rc genhtml_branch_coverage=1 00:05:15.325 --rc genhtml_function_coverage=1 00:05:15.325 --rc genhtml_legend=1 00:05:15.325 --rc geninfo_all_blocks=1 00:05:15.325 --rc geninfo_unexecuted_blocks=1 00:05:15.325 00:05:15.325 ' 00:05:15.325 17:39:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.325 --rc genhtml_branch_coverage=1 00:05:15.325 --rc genhtml_function_coverage=1 00:05:15.325 --rc genhtml_legend=1 00:05:15.325 --rc geninfo_all_blocks=1 00:05:15.325 --rc geninfo_unexecuted_blocks=1 00:05:15.325 00:05:15.325 ' 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6aee6518-a1c0-4f87-b451-1b81ad9dbce6 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6aee6518-a1c0-4f87-b451-1b81ad9dbce6 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.325 17:39:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.325 17:39:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.325 17:39:42 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.325 17:39:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.325 17:39:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:15.325 17:39:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.325 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.325 17:39:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:15.325 INFO: launching applications... 
00:05:15.325 17:39:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57909 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:15.325 17:39:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.325 Waiting for target to run... 00:05:15.326 17:39:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57909 /var/tmp/spdk_tgt.sock 00:05:15.326 17:39:42 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57909 ']' 00:05:15.326 17:39:42 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.326 17:39:42 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.326 17:39:42 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:15.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.326 17:39:42 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.326 17:39:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.585 [2024-11-20 17:39:42.563369] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:15.585 [2024-11-20 17:39:42.563591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57909 ] 00:05:15.845 [2024-11-20 17:39:42.944303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.105 [2024-11-20 17:39:43.117231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.042 17:39:44 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.042 17:39:44 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:17.042 00:05:17.042 17:39:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:17.042 INFO: shutting down applications... 
00:05:17.042 17:39:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57909 ]] 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57909 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:17.042 17:39:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.611 17:39:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.611 17:39:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.611 17:39:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:17.611 17:39:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.871 17:39:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.871 17:39:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.871 17:39:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:17.871 17:39:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.438 17:39:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.438 17:39:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.438 17:39:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:18.438 17:39:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.014 17:39:46 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:19.014 17:39:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.014 17:39:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:19.014 17:39:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.583 17:39:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.583 17:39:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.583 17:39:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:19.583 17:39:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.152 17:39:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.152 17:39:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.152 17:39:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:20.152 17:39:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.412 17:39:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.412 17:39:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.412 17:39:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:20.412 17:39:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.980 17:39:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.980 17:39:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.980 17:39:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57909 00:05:20.980 SPDK target shutdown done 00:05:20.980 Success 00:05:20.980 17:39:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.980 17:39:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.980 17:39:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.980 17:39:48 json_config_extra_key -- 
json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.980 17:39:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.980 00:05:20.980 real 0m5.867s 00:05:20.980 user 0m5.177s 00:05:20.980 sys 0m0.634s 00:05:20.980 17:39:48 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.980 17:39:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.980 ************************************ 00:05:20.980 END TEST json_config_extra_key 00:05:20.980 ************************************ 00:05:20.980 17:39:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.980 17:39:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.980 17:39:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.980 17:39:48 -- common/autotest_common.sh@10 -- # set +x 00:05:20.980 ************************************ 00:05:20.980 START TEST alias_rpc 00:05:20.980 ************************************ 00:05:20.980 17:39:48 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.239 * Looking for test storage... 
00:05:21.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:21.239 17:39:48 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.239 17:39:48 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.239 17:39:48 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.239 17:39:48 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.239 17:39:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.240 17:39:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.240 --rc genhtml_branch_coverage=1 00:05:21.240 --rc genhtml_function_coverage=1 00:05:21.240 --rc genhtml_legend=1 00:05:21.240 --rc geninfo_all_blocks=1 00:05:21.240 --rc geninfo_unexecuted_blocks=1 00:05:21.240 00:05:21.240 ' 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.240 --rc genhtml_branch_coverage=1 00:05:21.240 --rc genhtml_function_coverage=1 00:05:21.240 --rc genhtml_legend=1 00:05:21.240 --rc geninfo_all_blocks=1 00:05:21.240 --rc geninfo_unexecuted_blocks=1 00:05:21.240 00:05:21.240 ' 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:21.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.240 --rc genhtml_branch_coverage=1 00:05:21.240 --rc genhtml_function_coverage=1 00:05:21.240 --rc genhtml_legend=1 00:05:21.240 --rc geninfo_all_blocks=1 00:05:21.240 --rc geninfo_unexecuted_blocks=1 00:05:21.240 00:05:21.240 ' 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.240 --rc genhtml_branch_coverage=1 00:05:21.240 --rc genhtml_function_coverage=1 00:05:21.240 --rc genhtml_legend=1 00:05:21.240 --rc geninfo_all_blocks=1 00:05:21.240 --rc geninfo_unexecuted_blocks=1 00:05:21.240 00:05:21.240 ' 00:05:21.240 17:39:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.240 17:39:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.240 17:39:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58035 00:05:21.240 17:39:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58035 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58035 ']' 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.240 17:39:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.499 [2024-11-20 17:39:48.527213] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:05:21.499 [2024-11-20 17:39:48.527540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58035 ] 00:05:21.759 [2024-11-20 17:39:48.710635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.759 [2024-11-20 17:39:48.867842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.135 17:39:49 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.135 17:39:49 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.136 17:39:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:23.136 17:39:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58035 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58035 ']' 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58035 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58035 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58035' 00:05:23.136 killing process with pid 58035 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@973 -- # kill 58035 00:05:23.136 17:39:50 alias_rpc -- common/autotest_common.sh@978 -- # wait 58035 00:05:26.431 00:05:26.431 real 0m4.902s 00:05:26.431 user 0m4.739s 00:05:26.431 sys 0m0.800s 00:05:26.431 17:39:53 alias_rpc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:26.431 ************************************ 00:05:26.431 END TEST alias_rpc 00:05:26.431 ************************************ 00:05:26.431 17:39:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.431 17:39:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:26.431 17:39:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:26.431 17:39:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.431 17:39:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.431 17:39:53 -- common/autotest_common.sh@10 -- # set +x 00:05:26.431 ************************************ 00:05:26.431 START TEST spdkcli_tcp 00:05:26.431 ************************************ 00:05:26.431 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:26.431 * Looking for test storage... 00:05:26.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:26.431 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.431 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.431 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.431 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.431 
17:39:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.431 17:39:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:26.431 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.431 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.431 --rc genhtml_branch_coverage=1 00:05:26.431 --rc genhtml_function_coverage=1 00:05:26.431 --rc genhtml_legend=1 
00:05:26.431 --rc geninfo_all_blocks=1 00:05:26.431 --rc geninfo_unexecuted_blocks=1 00:05:26.431 00:05:26.432 ' 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.432 --rc genhtml_branch_coverage=1 00:05:26.432 --rc genhtml_function_coverage=1 00:05:26.432 --rc genhtml_legend=1 00:05:26.432 --rc geninfo_all_blocks=1 00:05:26.432 --rc geninfo_unexecuted_blocks=1 00:05:26.432 00:05:26.432 ' 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.432 --rc genhtml_branch_coverage=1 00:05:26.432 --rc genhtml_function_coverage=1 00:05:26.432 --rc genhtml_legend=1 00:05:26.432 --rc geninfo_all_blocks=1 00:05:26.432 --rc geninfo_unexecuted_blocks=1 00:05:26.432 00:05:26.432 ' 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.432 --rc genhtml_branch_coverage=1 00:05:26.432 --rc genhtml_function_coverage=1 00:05:26.432 --rc genhtml_legend=1 00:05:26.432 --rc geninfo_all_blocks=1 00:05:26.432 --rc geninfo_unexecuted_blocks=1 00:05:26.432 00:05:26.432 ' 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.432 17:39:53 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58148 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.432 17:39:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58148 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58148 ']' 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.432 17:39:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.432 [2024-11-20 17:39:53.473528] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:05:26.432 [2024-11-20 17:39:53.473788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58148 ] 00:05:26.693 [2024-11-20 17:39:53.660314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.693 [2024-11-20 17:39:53.806504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.693 [2024-11-20 17:39:53.806541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.075 17:39:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.075 17:39:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:28.075 17:39:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58176 00:05:28.075 17:39:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:28.075 17:39:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:28.075 [ 00:05:28.075 "bdev_malloc_delete", 00:05:28.075 "bdev_malloc_create", 00:05:28.075 "bdev_null_resize", 00:05:28.075 "bdev_null_delete", 00:05:28.075 "bdev_null_create", 00:05:28.075 "bdev_nvme_cuse_unregister", 00:05:28.075 "bdev_nvme_cuse_register", 00:05:28.075 "bdev_opal_new_user", 00:05:28.075 "bdev_opal_set_lock_state", 00:05:28.075 "bdev_opal_delete", 00:05:28.075 "bdev_opal_get_info", 00:05:28.075 "bdev_opal_create", 00:05:28.075 "bdev_nvme_opal_revert", 00:05:28.075 "bdev_nvme_opal_init", 00:05:28.075 "bdev_nvme_send_cmd", 00:05:28.075 "bdev_nvme_set_keys", 00:05:28.075 "bdev_nvme_get_path_iostat", 00:05:28.075 "bdev_nvme_get_mdns_discovery_info", 00:05:28.075 "bdev_nvme_stop_mdns_discovery", 00:05:28.075 "bdev_nvme_start_mdns_discovery", 00:05:28.075 "bdev_nvme_set_multipath_policy", 00:05:28.075 
"bdev_nvme_set_preferred_path", 00:05:28.075 "bdev_nvme_get_io_paths", 00:05:28.075 "bdev_nvme_remove_error_injection", 00:05:28.075 "bdev_nvme_add_error_injection", 00:05:28.075 "bdev_nvme_get_discovery_info", 00:05:28.075 "bdev_nvme_stop_discovery", 00:05:28.075 "bdev_nvme_start_discovery", 00:05:28.075 "bdev_nvme_get_controller_health_info", 00:05:28.075 "bdev_nvme_disable_controller", 00:05:28.075 "bdev_nvme_enable_controller", 00:05:28.075 "bdev_nvme_reset_controller", 00:05:28.075 "bdev_nvme_get_transport_statistics", 00:05:28.075 "bdev_nvme_apply_firmware", 00:05:28.075 "bdev_nvme_detach_controller", 00:05:28.075 "bdev_nvme_get_controllers", 00:05:28.075 "bdev_nvme_attach_controller", 00:05:28.075 "bdev_nvme_set_hotplug", 00:05:28.075 "bdev_nvme_set_options", 00:05:28.075 "bdev_passthru_delete", 00:05:28.075 "bdev_passthru_create", 00:05:28.075 "bdev_lvol_set_parent_bdev", 00:05:28.075 "bdev_lvol_set_parent", 00:05:28.075 "bdev_lvol_check_shallow_copy", 00:05:28.075 "bdev_lvol_start_shallow_copy", 00:05:28.075 "bdev_lvol_grow_lvstore", 00:05:28.075 "bdev_lvol_get_lvols", 00:05:28.075 "bdev_lvol_get_lvstores", 00:05:28.075 "bdev_lvol_delete", 00:05:28.075 "bdev_lvol_set_read_only", 00:05:28.075 "bdev_lvol_resize", 00:05:28.075 "bdev_lvol_decouple_parent", 00:05:28.075 "bdev_lvol_inflate", 00:05:28.075 "bdev_lvol_rename", 00:05:28.075 "bdev_lvol_clone_bdev", 00:05:28.075 "bdev_lvol_clone", 00:05:28.075 "bdev_lvol_snapshot", 00:05:28.075 "bdev_lvol_create", 00:05:28.075 "bdev_lvol_delete_lvstore", 00:05:28.075 "bdev_lvol_rename_lvstore", 00:05:28.075 "bdev_lvol_create_lvstore", 00:05:28.075 "bdev_raid_set_options", 00:05:28.075 "bdev_raid_remove_base_bdev", 00:05:28.075 "bdev_raid_add_base_bdev", 00:05:28.075 "bdev_raid_delete", 00:05:28.075 "bdev_raid_create", 00:05:28.075 "bdev_raid_get_bdevs", 00:05:28.075 "bdev_error_inject_error", 00:05:28.075 "bdev_error_delete", 00:05:28.075 "bdev_error_create", 00:05:28.075 "bdev_split_delete", 00:05:28.075 
"bdev_split_create", 00:05:28.075 "bdev_delay_delete", 00:05:28.075 "bdev_delay_create", 00:05:28.075 "bdev_delay_update_latency", 00:05:28.075 "bdev_zone_block_delete", 00:05:28.075 "bdev_zone_block_create", 00:05:28.075 "blobfs_create", 00:05:28.075 "blobfs_detect", 00:05:28.075 "blobfs_set_cache_size", 00:05:28.075 "bdev_aio_delete", 00:05:28.075 "bdev_aio_rescan", 00:05:28.075 "bdev_aio_create", 00:05:28.075 "bdev_ftl_set_property", 00:05:28.075 "bdev_ftl_get_properties", 00:05:28.075 "bdev_ftl_get_stats", 00:05:28.075 "bdev_ftl_unmap", 00:05:28.075 "bdev_ftl_unload", 00:05:28.075 "bdev_ftl_delete", 00:05:28.075 "bdev_ftl_load", 00:05:28.075 "bdev_ftl_create", 00:05:28.075 "bdev_virtio_attach_controller", 00:05:28.075 "bdev_virtio_scsi_get_devices", 00:05:28.075 "bdev_virtio_detach_controller", 00:05:28.075 "bdev_virtio_blk_set_hotplug", 00:05:28.075 "bdev_iscsi_delete", 00:05:28.075 "bdev_iscsi_create", 00:05:28.075 "bdev_iscsi_set_options", 00:05:28.075 "accel_error_inject_error", 00:05:28.075 "ioat_scan_accel_module", 00:05:28.075 "dsa_scan_accel_module", 00:05:28.075 "iaa_scan_accel_module", 00:05:28.075 "keyring_file_remove_key", 00:05:28.075 "keyring_file_add_key", 00:05:28.075 "keyring_linux_set_options", 00:05:28.075 "fsdev_aio_delete", 00:05:28.075 "fsdev_aio_create", 00:05:28.075 "iscsi_get_histogram", 00:05:28.075 "iscsi_enable_histogram", 00:05:28.075 "iscsi_set_options", 00:05:28.075 "iscsi_get_auth_groups", 00:05:28.075 "iscsi_auth_group_remove_secret", 00:05:28.075 "iscsi_auth_group_add_secret", 00:05:28.075 "iscsi_delete_auth_group", 00:05:28.075 "iscsi_create_auth_group", 00:05:28.075 "iscsi_set_discovery_auth", 00:05:28.075 "iscsi_get_options", 00:05:28.075 "iscsi_target_node_request_logout", 00:05:28.075 "iscsi_target_node_set_redirect", 00:05:28.075 "iscsi_target_node_set_auth", 00:05:28.075 "iscsi_target_node_add_lun", 00:05:28.075 "iscsi_get_stats", 00:05:28.075 "iscsi_get_connections", 00:05:28.075 "iscsi_portal_group_set_auth", 
00:05:28.075 "iscsi_start_portal_group", 00:05:28.075 "iscsi_delete_portal_group", 00:05:28.075 "iscsi_create_portal_group", 00:05:28.075 "iscsi_get_portal_groups", 00:05:28.075 "iscsi_delete_target_node", 00:05:28.075 "iscsi_target_node_remove_pg_ig_maps", 00:05:28.075 "iscsi_target_node_add_pg_ig_maps", 00:05:28.075 "iscsi_create_target_node", 00:05:28.075 "iscsi_get_target_nodes", 00:05:28.075 "iscsi_delete_initiator_group", 00:05:28.075 "iscsi_initiator_group_remove_initiators", 00:05:28.075 "iscsi_initiator_group_add_initiators", 00:05:28.075 "iscsi_create_initiator_group", 00:05:28.075 "iscsi_get_initiator_groups", 00:05:28.075 "nvmf_set_crdt", 00:05:28.075 "nvmf_set_config", 00:05:28.075 "nvmf_set_max_subsystems", 00:05:28.075 "nvmf_stop_mdns_prr", 00:05:28.075 "nvmf_publish_mdns_prr", 00:05:28.075 "nvmf_subsystem_get_listeners", 00:05:28.075 "nvmf_subsystem_get_qpairs", 00:05:28.075 "nvmf_subsystem_get_controllers", 00:05:28.075 "nvmf_get_stats", 00:05:28.075 "nvmf_get_transports", 00:05:28.075 "nvmf_create_transport", 00:05:28.075 "nvmf_get_targets", 00:05:28.075 "nvmf_delete_target", 00:05:28.075 "nvmf_create_target", 00:05:28.075 "nvmf_subsystem_allow_any_host", 00:05:28.075 "nvmf_subsystem_set_keys", 00:05:28.075 "nvmf_subsystem_remove_host", 00:05:28.075 "nvmf_subsystem_add_host", 00:05:28.075 "nvmf_ns_remove_host", 00:05:28.075 "nvmf_ns_add_host", 00:05:28.075 "nvmf_subsystem_remove_ns", 00:05:28.075 "nvmf_subsystem_set_ns_ana_group", 00:05:28.075 "nvmf_subsystem_add_ns", 00:05:28.075 "nvmf_subsystem_listener_set_ana_state", 00:05:28.075 "nvmf_discovery_get_referrals", 00:05:28.075 "nvmf_discovery_remove_referral", 00:05:28.075 "nvmf_discovery_add_referral", 00:05:28.075 "nvmf_subsystem_remove_listener", 00:05:28.075 "nvmf_subsystem_add_listener", 00:05:28.075 "nvmf_delete_subsystem", 00:05:28.075 "nvmf_create_subsystem", 00:05:28.075 "nvmf_get_subsystems", 00:05:28.075 "env_dpdk_get_mem_stats", 00:05:28.075 "nbd_get_disks", 00:05:28.075 
"nbd_stop_disk", 00:05:28.075 "nbd_start_disk", 00:05:28.075 "ublk_recover_disk", 00:05:28.075 "ublk_get_disks", 00:05:28.075 "ublk_stop_disk", 00:05:28.075 "ublk_start_disk", 00:05:28.075 "ublk_destroy_target", 00:05:28.075 "ublk_create_target", 00:05:28.075 "virtio_blk_create_transport", 00:05:28.075 "virtio_blk_get_transports", 00:05:28.075 "vhost_controller_set_coalescing", 00:05:28.075 "vhost_get_controllers", 00:05:28.075 "vhost_delete_controller", 00:05:28.075 "vhost_create_blk_controller", 00:05:28.075 "vhost_scsi_controller_remove_target", 00:05:28.075 "vhost_scsi_controller_add_target", 00:05:28.075 "vhost_start_scsi_controller", 00:05:28.075 "vhost_create_scsi_controller", 00:05:28.075 "thread_set_cpumask", 00:05:28.075 "scheduler_set_options", 00:05:28.075 "framework_get_governor", 00:05:28.075 "framework_get_scheduler", 00:05:28.075 "framework_set_scheduler", 00:05:28.075 "framework_get_reactors", 00:05:28.075 "thread_get_io_channels", 00:05:28.075 "thread_get_pollers", 00:05:28.075 "thread_get_stats", 00:05:28.075 "framework_monitor_context_switch", 00:05:28.075 "spdk_kill_instance", 00:05:28.075 "log_enable_timestamps", 00:05:28.075 "log_get_flags", 00:05:28.075 "log_clear_flag", 00:05:28.075 "log_set_flag", 00:05:28.075 "log_get_level", 00:05:28.075 "log_set_level", 00:05:28.075 "log_get_print_level", 00:05:28.075 "log_set_print_level", 00:05:28.075 "framework_enable_cpumask_locks", 00:05:28.075 "framework_disable_cpumask_locks", 00:05:28.075 "framework_wait_init", 00:05:28.075 "framework_start_init", 00:05:28.075 "scsi_get_devices", 00:05:28.076 "bdev_get_histogram", 00:05:28.076 "bdev_enable_histogram", 00:05:28.076 "bdev_set_qos_limit", 00:05:28.076 "bdev_set_qd_sampling_period", 00:05:28.076 "bdev_get_bdevs", 00:05:28.076 "bdev_reset_iostat", 00:05:28.076 "bdev_get_iostat", 00:05:28.076 "bdev_examine", 00:05:28.076 "bdev_wait_for_examine", 00:05:28.076 "bdev_set_options", 00:05:28.076 "accel_get_stats", 00:05:28.076 "accel_set_options", 
00:05:28.076 "accel_set_driver", 00:05:28.076 "accel_crypto_key_destroy", 00:05:28.076 "accel_crypto_keys_get", 00:05:28.076 "accel_crypto_key_create", 00:05:28.076 "accel_assign_opc", 00:05:28.076 "accel_get_module_info", 00:05:28.076 "accel_get_opc_assignments", 00:05:28.076 "vmd_rescan", 00:05:28.076 "vmd_remove_device", 00:05:28.076 "vmd_enable", 00:05:28.076 "sock_get_default_impl", 00:05:28.076 "sock_set_default_impl", 00:05:28.076 "sock_impl_set_options", 00:05:28.076 "sock_impl_get_options", 00:05:28.076 "iobuf_get_stats", 00:05:28.076 "iobuf_set_options", 00:05:28.076 "keyring_get_keys", 00:05:28.076 "framework_get_pci_devices", 00:05:28.076 "framework_get_config", 00:05:28.076 "framework_get_subsystems", 00:05:28.076 "fsdev_set_opts", 00:05:28.076 "fsdev_get_opts", 00:05:28.076 "trace_get_info", 00:05:28.076 "trace_get_tpoint_group_mask", 00:05:28.076 "trace_disable_tpoint_group", 00:05:28.076 "trace_enable_tpoint_group", 00:05:28.076 "trace_clear_tpoint_mask", 00:05:28.076 "trace_set_tpoint_mask", 00:05:28.076 "notify_get_notifications", 00:05:28.076 "notify_get_types", 00:05:28.076 "spdk_get_version", 00:05:28.076 "rpc_get_methods" 00:05:28.076 ] 00:05:28.076 17:39:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:28.076 17:39:55 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.076 17:39:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.335 17:39:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:28.335 17:39:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58148 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58148 ']' 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58148 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.335 17:39:55 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58148 00:05:28.335 killing process with pid 58148 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58148' 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58148 00:05:28.335 17:39:55 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58148 00:05:31.630 ************************************ 00:05:31.630 END TEST spdkcli_tcp 00:05:31.630 ************************************ 00:05:31.630 00:05:31.630 real 0m5.077s 00:05:31.630 user 0m8.964s 00:05:31.630 sys 0m0.856s 00:05:31.630 17:39:58 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.630 17:39:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.630 17:39:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.630 17:39:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.630 17:39:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.630 17:39:58 -- common/autotest_common.sh@10 -- # set +x 00:05:31.630 ************************************ 00:05:31.630 START TEST dpdk_mem_utility 00:05:31.630 ************************************ 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.630 * Looking for test storage... 
00:05:31.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.630 17:39:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.630 --rc genhtml_branch_coverage=1 00:05:31.630 --rc genhtml_function_coverage=1 00:05:31.630 --rc genhtml_legend=1 00:05:31.630 --rc geninfo_all_blocks=1 00:05:31.630 --rc geninfo_unexecuted_blocks=1 00:05:31.630 00:05:31.630 ' 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.630 --rc genhtml_branch_coverage=1 00:05:31.630 --rc genhtml_function_coverage=1 00:05:31.630 --rc genhtml_legend=1 00:05:31.630 --rc geninfo_all_blocks=1 00:05:31.630 --rc 
geninfo_unexecuted_blocks=1 00:05:31.630 00:05:31.630 ' 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.630 --rc genhtml_branch_coverage=1 00:05:31.630 --rc genhtml_function_coverage=1 00:05:31.630 --rc genhtml_legend=1 00:05:31.630 --rc geninfo_all_blocks=1 00:05:31.630 --rc geninfo_unexecuted_blocks=1 00:05:31.630 00:05:31.630 ' 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.630 --rc genhtml_branch_coverage=1 00:05:31.630 --rc genhtml_function_coverage=1 00:05:31.630 --rc genhtml_legend=1 00:05:31.630 --rc geninfo_all_blocks=1 00:05:31.630 --rc geninfo_unexecuted_blocks=1 00:05:31.630 00:05:31.630 ' 00:05:31.630 17:39:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:31.630 17:39:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58281 00:05:31.630 17:39:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.630 17:39:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58281 00:05:31.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58281 ']' 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.630 17:39:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.630 [2024-11-20 17:39:58.611937] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:31.630 [2024-11-20 17:39:58.612278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58281 ] 00:05:31.630 [2024-11-20 17:39:58.794430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.889 [2024-11-20 17:39:58.943771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.270 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.270 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:33.270 17:40:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:33.270 17:40:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:33.270 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.271 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.271 { 00:05:33.271 "filename": "/tmp/spdk_mem_dump.txt" 00:05:33.271 } 00:05:33.271 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.271 17:40:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:33.271 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:33.271 1 heaps totaling size 824.000000 MiB 00:05:33.271 size: 824.000000 MiB heap id: 0 00:05:33.271 end heaps---------- 00:05:33.271 9 mempools totaling size 603.782043 MiB 00:05:33.271 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:33.271 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:33.271 size: 100.555481 MiB name: bdev_io_58281 00:05:33.271 size: 50.003479 MiB name: msgpool_58281 00:05:33.271 size: 36.509338 MiB name: fsdev_io_58281 00:05:33.271 size: 21.763794 MiB name: PDU_Pool 00:05:33.271 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:33.271 size: 4.133484 MiB name: evtpool_58281 00:05:33.271 size: 0.026123 MiB name: Session_Pool 00:05:33.271 end mempools------- 00:05:33.271 6 memzones totaling size 4.142822 MiB 00:05:33.271 size: 1.000366 MiB name: RG_ring_0_58281 00:05:33.271 size: 1.000366 MiB name: RG_ring_1_58281 00:05:33.271 size: 1.000366 MiB name: RG_ring_4_58281 00:05:33.271 size: 1.000366 MiB name: RG_ring_5_58281 00:05:33.271 size: 0.125366 MiB name: RG_ring_2_58281 00:05:33.271 size: 0.015991 MiB name: RG_ring_3_58281 00:05:33.271 end memzones------- 00:05:33.271 17:40:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:33.271 heap id: 0 total size: 824.000000 MiB number of busy elements: 321 number of free elements: 18 00:05:33.271 list of free elements. 
size: 16.779907 MiB 00:05:33.271 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:33.271 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:33.271 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:33.271 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:33.271 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:33.271 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:33.271 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:33.271 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:33.271 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:33.271 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:33.271 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:33.271 element at address: 0x20001b400000 with size: 0.561218 MiB 00:05:33.271 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:33.271 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:33.271 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:33.271 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:33.271 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:33.271 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:33.271 list of standard malloc elements. 
size: 199.289185 MiB
00:05:33.271 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:33.271 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:33.271 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:33.271 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:05:33.271 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:05:33.271 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:33.271 element at address: 0x200019deff40 with size: 0.062683 MiB
00:05:33.271 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:33.271 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:33.271 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:05:33.271 element at address: 0x200012bff040 with size: 0.000305 MiB
[... several hundred further elements of size 0.000244 MiB each, at addresses in the ranges 0x2000002d7b00 to 0x2000004ffdc0, 0x20000087e1c0 to 0x2000008ffa80, 0x200000c7d3c0 to 0x200000cff000, 0x20000a5ff200 to 0x20000a5fff00, 0x200012bff180 to 0x200012cefbc0, 0x2000192fdd00 to 0x200019ebc680, 0x20001b48fac0 to 0x20001b4953c0, and 0x200028863f40 to 0x20002886fe80 ...]
00:05:33.273 list of memzone associated elements.
size: 607.930908 MiB
00:05:33.273 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:05:33.273 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:33.273 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:05:33.273 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:33.273 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:05:33.273 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58281_0
00:05:33.273 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:33.273 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58281_0
00:05:33.273 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:33.273 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58281_0
00:05:33.273 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:05:33.273 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:33.273 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:05:33.273 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:33.273 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:33.273 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58281_0
00:05:33.273 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:33.273 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58281
00:05:33.273 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:33.273 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58281
00:05:33.273 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:05:33.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:33.273 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:05:33.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:33.273 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:05:33.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:33.273 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:05:33.273 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:33.273 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:33.273 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58281
00:05:33.273 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:33.273 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58281
00:05:33.273 element at address: 0x200019affd40 with size: 1.000549 MiB
00:05:33.273 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58281
00:05:33.273 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:05:33.273 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58281
00:05:33.273 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:33.273 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58281
00:05:33.273 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:33.273 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58281
00:05:33.273 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:05:33.273 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:33.273 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:05:33.273 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:33.273 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:05:33.273 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:33.273 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:33.273 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58281
00:05:33.273 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:33.273 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58281
00:05:33.273 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:05:33.273 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:33.273 element at address: 0x200028864140 with size: 0.023804 MiB
00:05:33.273 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:33.273 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:33.273 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58281
00:05:33.273 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:05:33.274 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:33.274 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:33.274 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58281
00:05:33.274 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:33.274 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58281
00:05:33.274 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:33.274 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58281
00:05:33.274 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:05:33.274 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:33.274 17:40:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:33.274 17:40:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58281
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58281 ']'
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58281
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58281
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58281'
killing process with pid 58281
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58281
00:05:33.274 17:40:00 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58281
00:05:36.562
00:05:36.562 real 0m4.768s
00:05:36.562 user 0m4.565s
00:05:36.562 sys 0m0.818s
00:05:36.562 17:40:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:36.562 17:40:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:36.562 ************************************
00:05:36.562 END TEST dpdk_mem_utility
00:05:36.562 ************************************
00:05:36.562 17:40:03 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:36.562 17:40:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.562 17:40:03 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.562 17:40:03 -- common/autotest_common.sh@10 -- # set +x
00:05:36.562 ************************************
00:05:36.562 START TEST event
00:05:36.562 ************************************
00:05:36.562 17:40:03 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:36.562 * Looking for test storage...
00:05:36.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:36.563 17:40:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:36.563 17:40:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:36.563 17:40:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:36.563 17:40:03 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:36.563 17:40:03 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:36.563 17:40:03 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:36.563 17:40:03 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:36.563 17:40:03 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:36.563 17:40:03 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:36.563 17:40:03 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:36.563 17:40:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:36.563 17:40:03 event -- scripts/common.sh@344 -- # case "$op" in
00:05:36.563 17:40:03 event -- scripts/common.sh@345 -- # : 1
00:05:36.563 17:40:03 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:36.563 17:40:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.563 17:40:03 event -- scripts/common.sh@365 -- # decimal 1
00:05:36.563 17:40:03 event -- scripts/common.sh@353 -- # local d=1
00:05:36.563 17:40:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.563 17:40:03 event -- scripts/common.sh@355 -- # echo 1
00:05:36.563 17:40:03 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:36.563 17:40:03 event -- scripts/common.sh@366 -- # decimal 2
00:05:36.563 17:40:03 event -- scripts/common.sh@353 -- # local d=2
00:05:36.563 17:40:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.563 17:40:03 event -- scripts/common.sh@355 -- # echo 2
00:05:36.563 17:40:03 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:36.563 17:40:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:36.563 17:40:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:36.563 17:40:03 event -- scripts/common.sh@368 -- # return 0
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:36.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.563 --rc genhtml_branch_coverage=1
00:05:36.563 --rc genhtml_function_coverage=1
00:05:36.563 --rc genhtml_legend=1
00:05:36.563 --rc geninfo_all_blocks=1
00:05:36.563 --rc geninfo_unexecuted_blocks=1
00:05:36.563
00:05:36.563 '
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:36.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.563 --rc genhtml_branch_coverage=1
00:05:36.563 --rc genhtml_function_coverage=1
00:05:36.563 --rc genhtml_legend=1
00:05:36.563 --rc geninfo_all_blocks=1
00:05:36.563 --rc geninfo_unexecuted_blocks=1
00:05:36.563
00:05:36.563 '
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:36.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.563 --rc genhtml_branch_coverage=1
00:05:36.563 --rc genhtml_function_coverage=1
00:05:36.563 --rc genhtml_legend=1
00:05:36.563 --rc geninfo_all_blocks=1
00:05:36.563 --rc geninfo_unexecuted_blocks=1
00:05:36.563
00:05:36.563 '
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:36.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.563 --rc genhtml_branch_coverage=1
00:05:36.563 --rc genhtml_function_coverage=1
00:05:36.563 --rc genhtml_legend=1
00:05:36.563 --rc geninfo_all_blocks=1
00:05:36.563 --rc geninfo_unexecuted_blocks=1
00:05:36.563
00:05:36.563 '
00:05:36.563 17:40:03 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:36.563 17:40:03 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:36.563 17:40:03 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:36.563 17:40:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.563 17:40:03 event -- common/autotest_common.sh@10 -- # set +x
00:05:36.563 ************************************
00:05:36.563 START TEST event_perf
00:05:36.563 ************************************
00:05:36.563 17:40:03 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:36.563 Running I/O for 1 seconds...[2024-11-20 17:40:03.401799] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:05:36.563 [2024-11-20 17:40:03.401947] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58400 ] 00:05:36.563 [2024-11-20 17:40:03.583350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.563 [2024-11-20 17:40:03.712808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.563 [2024-11-20 17:40:03.713202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.563 [2024-11-20 17:40:03.713177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.563 [2024-11-20 17:40:03.713002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.939 Running I/O for 1 seconds... 00:05:37.939 lcore 0: 76398 00:05:37.939 lcore 1: 76388 00:05:37.939 lcore 2: 76391 00:05:37.939 lcore 3: 76395 00:05:37.939 done. 
00:05:37.939 00:05:37.939 real 0m1.640s 00:05:37.939 user 0m4.379s 00:05:37.939 sys 0m0.136s 00:05:37.939 17:40:04 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.939 17:40:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.939 ************************************ 00:05:37.939 END TEST event_perf 00:05:37.939 ************************************ 00:05:37.939 17:40:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.939 17:40:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.939 17:40:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.939 17:40:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.939 ************************************ 00:05:37.939 START TEST event_reactor 00:05:37.939 ************************************ 00:05:37.939 17:40:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.939 [2024-11-20 17:40:05.112451] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:05:37.939 [2024-11-20 17:40:05.112660] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58434 ] 00:05:38.198 [2024-11-20 17:40:05.295864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.457 [2024-11-20 17:40:05.422794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.835 test_start 00:05:39.835 oneshot 00:05:39.835 tick 100 00:05:39.835 tick 100 00:05:39.835 tick 250 00:05:39.835 tick 100 00:05:39.835 tick 100 00:05:39.835 tick 100 00:05:39.835 tick 250 00:05:39.835 tick 500 00:05:39.835 tick 100 00:05:39.835 tick 100 00:05:39.835 tick 250 00:05:39.835 tick 100 00:05:39.835 tick 100 00:05:39.835 test_end 00:05:39.835 00:05:39.835 real 0m1.606s 00:05:39.835 user 0m1.393s 00:05:39.835 sys 0m0.103s 00:05:39.835 17:40:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.835 17:40:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.835 ************************************ 00:05:39.835 END TEST event_reactor 00:05:39.835 ************************************ 00:05:39.835 17:40:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.835 17:40:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:39.835 17:40:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.835 17:40:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.835 ************************************ 00:05:39.835 START TEST event_reactor_perf 00:05:39.835 ************************************ 00:05:39.835 17:40:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.835 [2024-11-20 
17:40:06.790827] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:39.835 [2024-11-20 17:40:06.791078] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58476 ] 00:05:39.835 [2024-11-20 17:40:06.967811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.094 [2024-11-20 17:40:07.094133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.473 test_start 00:05:41.473 test_end 00:05:41.473 Performance: 353541 events per second 00:05:41.473 00:05:41.473 real 0m1.594s 00:05:41.473 user 0m1.374s 00:05:41.473 sys 0m0.111s 00:05:41.473 ************************************ 00:05:41.473 END TEST event_reactor_perf 00:05:41.473 ************************************ 00:05:41.473 17:40:08 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.473 17:40:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.473 17:40:08 event -- event/event.sh@49 -- # uname -s 00:05:41.473 17:40:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.473 17:40:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:41.473 17:40:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.474 17:40:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.474 17:40:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.474 ************************************ 00:05:41.474 START TEST event_scheduler 00:05:41.474 ************************************ 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:41.474 * Looking for test storage... 
00:05:41.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.474 17:40:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.474 --rc genhtml_branch_coverage=1 00:05:41.474 --rc genhtml_function_coverage=1 00:05:41.474 --rc genhtml_legend=1 00:05:41.474 --rc geninfo_all_blocks=1 00:05:41.474 --rc geninfo_unexecuted_blocks=1 00:05:41.474 00:05:41.474 ' 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.474 --rc genhtml_branch_coverage=1 00:05:41.474 --rc genhtml_function_coverage=1 00:05:41.474 --rc 
genhtml_legend=1 00:05:41.474 --rc geninfo_all_blocks=1 00:05:41.474 --rc geninfo_unexecuted_blocks=1 00:05:41.474 00:05:41.474 ' 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.474 --rc genhtml_branch_coverage=1 00:05:41.474 --rc genhtml_function_coverage=1 00:05:41.474 --rc genhtml_legend=1 00:05:41.474 --rc geninfo_all_blocks=1 00:05:41.474 --rc geninfo_unexecuted_blocks=1 00:05:41.474 00:05:41.474 ' 00:05:41.474 17:40:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.474 --rc genhtml_branch_coverage=1 00:05:41.474 --rc genhtml_function_coverage=1 00:05:41.474 --rc genhtml_legend=1 00:05:41.474 --rc geninfo_all_blocks=1 00:05:41.474 --rc geninfo_unexecuted_blocks=1 00:05:41.474 00:05:41.474 ' 00:05:41.734 17:40:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.734 17:40:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58552 00:05:41.734 17:40:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:41.734 17:40:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.734 17:40:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58552 00:05:41.734 17:40:08 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58552 ']' 00:05:41.734 17:40:08 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.734 17:40:08 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.734 17:40:08 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:41.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.734 17:40:08 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.734 17:40:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.734 [2024-11-20 17:40:08.744981] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:41.734 [2024-11-20 17:40:08.745130] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58552 ] 00:05:41.993 [2024-11-20 17:40:08.928434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.993 [2024-11-20 17:40:09.074341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.993 [2024-11-20 17:40:09.074539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.993 [2024-11-20 17:40:09.074688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.993 [2024-11-20 17:40:09.074734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.563 17:40:09 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.563 17:40:09 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:42.563 17:40:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.563 17:40:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.563 17:40:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.563 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.563 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.563 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.563 POWER: Cannot set governor of lcore 0 to performance 00:05:42.563 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.563 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.563 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.563 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.563 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:42.563 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:42.563 POWER: Unable to set Power Management Environment for lcore 0 00:05:42.563 [2024-11-20 17:40:09.619645] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:42.563 [2024-11-20 17:40:09.619694] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:42.563 [2024-11-20 17:40:09.619728] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.563 [2024-11-20 17:40:09.619773] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.563 [2024-11-20 17:40:09.619783] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.563 [2024-11-20 17:40:09.619793] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.563 17:40:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.563 17:40:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.563 17:40:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.563 17:40:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.132 [2024-11-20 17:40:10.019121] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:43.132 17:40:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.132 17:40:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:43.132 17:40:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.132 17:40:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.132 17:40:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.132 ************************************ 00:05:43.132 START TEST scheduler_create_thread 00:05:43.132 ************************************ 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.132 2 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.132 3 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.132 4 00:05:43.132 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.133 5 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.133 6 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.133 7 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.133 8 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.133 9 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.133 10 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.133 17:40:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.512 17:40:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.512 17:40:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.512 17:40:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.512 17:40:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.512 17:40:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.445 17:40:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.445 17:40:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.445 17:40:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.445 17:40:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.012 17:40:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.012 17:40:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.012 17:40:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.012 17:40:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.012 17:40:13 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.947 ************************************ 00:05:46.947 END TEST scheduler_create_thread 00:05:46.947 ************************************ 00:05:46.947 17:40:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.947 00:05:46.947 real 0m3.886s 00:05:46.947 user 0m0.029s 00:05:46.947 sys 0m0.010s 00:05:46.947 17:40:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.947 17:40:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.947 17:40:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.947 17:40:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58552 00:05:46.947 17:40:13 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58552 ']' 00:05:46.947 17:40:13 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58552 00:05:46.947 17:40:13 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:46.947 17:40:13 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.947 17:40:13 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58552 00:05:46.947 killing process with pid 58552 00:05:46.947 17:40:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:46.947 17:40:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:46.947 17:40:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58552' 00:05:46.947 17:40:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58552 00:05:46.947 17:40:14 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58552 00:05:47.205 [2024-11-20 17:40:14.299723] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:48.584 00:05:48.584 real 0m7.211s 00:05:48.584 user 0m14.786s 00:05:48.584 sys 0m0.615s 00:05:48.584 ************************************ 00:05:48.584 END TEST event_scheduler 00:05:48.584 ************************************ 00:05:48.584 17:40:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.584 17:40:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.584 17:40:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.584 17:40:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.584 17:40:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.584 17:40:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.584 17:40:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.584 ************************************ 00:05:48.584 START TEST app_repeat 00:05:48.584 ************************************ 00:05:48.584 17:40:15 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58679 00:05:48.584 17:40:15 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.585 
17:40:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.585 Process app_repeat pid: 58679 00:05:48.585 spdk_app_start Round 0 00:05:48.585 17:40:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58679' 00:05:48.585 17:40:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.585 17:40:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.585 17:40:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58679 /var/tmp/spdk-nbd.sock 00:05:48.585 17:40:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58679 ']' 00:05:48.585 17:40:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.585 17:40:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.585 17:40:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.585 17:40:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.585 17:40:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.844 [2024-11-20 17:40:15.761023] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:05:48.844 [2024-11-20 17:40:15.761290] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58679 ] 00:05:48.844 [2024-11-20 17:40:15.929942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.104 [2024-11-20 17:40:16.060846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.104 [2024-11-20 17:40:16.060886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.674 17:40:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.674 17:40:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.674 17:40:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.934 Malloc0 00:05:49.934 17:40:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.194 Malloc1 00:05:50.194 17:40:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.194 17:40:17 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.194 17:40:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.453 /dev/nbd0 00:05:50.453 17:40:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.453 17:40:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.453 1+0 records in 00:05:50.453 1+0 
records out 00:05:50.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413483 s, 9.9 MB/s 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.453 17:40:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.453 17:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.453 17:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.453 17:40:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.712 /dev/nbd1 00:05:50.712 17:40:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.712 17:40:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.712 1+0 records in 00:05:50.712 1+0 records out 00:05:50.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435247 s, 9.4 MB/s 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.712 17:40:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.712 17:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.712 17:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.713 17:40:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.713 17:40:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.972 17:40:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.972 17:40:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.972 { 00:05:50.972 "nbd_device": "/dev/nbd0", 00:05:50.972 "bdev_name": "Malloc0" 00:05:50.972 }, 00:05:50.972 { 00:05:50.972 "nbd_device": "/dev/nbd1", 00:05:50.972 "bdev_name": "Malloc1" 00:05:50.972 } 00:05:50.972 ]' 00:05:50.972 17:40:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.972 17:40:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.972 { 00:05:50.972 "nbd_device": "/dev/nbd0", 00:05:50.972 "bdev_name": "Malloc0" 00:05:50.972 }, 00:05:50.972 { 00:05:50.972 "nbd_device": "/dev/nbd1", 00:05:50.972 "bdev_name": "Malloc1" 00:05:50.972 } 00:05:50.972 ]' 
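The `nbd_get_count` step above reduces the JSON from `nbd_get_disks` to a device count with `jq -r` and `grep -c`. A minimal standalone sketch of that pipeline (the JSON literal here is a hypothetical stand-in for the RPC output, not captured from this run):

```shell
# Count attached nbd devices from an nbd_get_disks-style JSON listing,
# mirroring the jq + grep -c pipeline used by bdev/nbd_common.sh.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Extract one device path per line, then count the lines that match.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"
```

On an empty listing (`[]`), `jq` emits nothing and `grep -c` prints 0, which is the path the teardown check below takes.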
00:05:50.972 17:40:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.972 /dev/nbd1' 00:05:50.972 17:40:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.231 /dev/nbd1' 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.231 256+0 records in 00:05:51.231 256+0 records out 00:05:51.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132079 s, 79.4 MB/s 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.231 256+0 records in 00:05:51.231 256+0 records out 00:05:51.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237469 s, 44.2 MB/s 00:05:51.231 17:40:18 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.231 256+0 records in 00:05:51.231 256+0 records out 00:05:51.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333948 s, 31.4 MB/s 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.231 17:40:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.490 17:40:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.748 17:40:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.007 17:40:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.007 17:40:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.579 17:40:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.967 [2024-11-20 17:40:20.792715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.967 [2024-11-20 17:40:20.920467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.967 [2024-11-20 17:40:20.920467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.967 
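The `waitfornbd_exit` calls above poll `/proc/partitions` until the stopped device name disappears, bounded at 20 attempts. A sketch of that loop under the same bound (the sleep interval and the example device name are assumptions, not taken from the harness):

```shell
# Wait for a device name to vanish from /proc/partitions, in the
# spirit of waitfornbd_exit in bdev/nbd_common.sh. Returns 0 once
# the name is gone, 1 if it is still listed after 20 checks.
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            return 0    # device gone: nbd_stop_disk teardown finished
        fi
        sleep 0.1       # assumed interval; the harness paces its own retries
    done
    return 1            # device still present after the retry budget
}

# A name that never appears in /proc/partitions returns immediately.
waitfornbd_exit hypothetical_nbd_name && echo "device gone"
```

The mirror-image `waitfornbd` used at start-up inverts the test: it `break`s out of the loop as soon as the name *does* appear, then does a direct-I/O `dd` read to confirm the device answers.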
[2024-11-20 17:40:21.134638] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.967 [2024-11-20 17:40:21.134751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.875 spdk_app_start Round 1 00:05:55.875 17:40:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.875 17:40:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:55.875 17:40:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58679 /var/tmp/spdk-nbd.sock 00:05:55.875 17:40:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58679 ']' 00:05:55.875 17:40:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.875 17:40:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.875 17:40:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:55.875 17:40:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.875 17:40:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.875 17:40:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.875 17:40:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.875 17:40:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.135 Malloc0 00:05:56.135 17:40:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.396 Malloc1 00:05:56.396 17:40:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.396 17:40:23 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.396 17:40:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.656 /dev/nbd0 00:05:56.656 17:40:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.656 17:40:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.656 1+0 records in 00:05:56.656 1+0 records out 00:05:56.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328881 s, 12.5 MB/s 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.656 
17:40:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.656 17:40:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.656 17:40:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.656 17:40:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.656 17:40:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.917 /dev/nbd1 00:05:56.917 17:40:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.917 17:40:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.917 1+0 records in 00:05:56.917 1+0 records out 00:05:56.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433939 s, 9.4 MB/s 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.917 17:40:23 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.917 17:40:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.917 17:40:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.917 17:40:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.917 17:40:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.917 17:40:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.917 17:40:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.177 { 00:05:57.177 "nbd_device": "/dev/nbd0", 00:05:57.177 "bdev_name": "Malloc0" 00:05:57.177 }, 00:05:57.177 { 00:05:57.177 "nbd_device": "/dev/nbd1", 00:05:57.177 "bdev_name": "Malloc1" 00:05:57.177 } 00:05:57.177 ]' 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.177 { 00:05:57.177 "nbd_device": "/dev/nbd0", 00:05:57.177 "bdev_name": "Malloc0" 00:05:57.177 }, 00:05:57.177 { 00:05:57.177 "nbd_device": "/dev/nbd1", 00:05:57.177 "bdev_name": "Malloc1" 00:05:57.177 } 00:05:57.177 ]' 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.177 /dev/nbd1' 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.177 /dev/nbd1' 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.177 
17:40:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.177 17:40:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.178 256+0 records in 00:05:57.178 256+0 records out 00:05:57.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046693 s, 225 MB/s 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.178 256+0 records in 00:05:57.178 256+0 records out 00:05:57.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239617 s, 43.8 MB/s 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.178 256+0 records in 00:05:57.178 256+0 records out 00:05:57.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286273 s, 36.6 MB/s 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.178 17:40:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.438 17:40:24 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.438 17:40:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.697 17:40:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.956 17:40:24 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.956 17:40:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.956 17:40:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.525 17:40:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.464 [2024-11-20 17:40:26.610770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.724 [2024-11-20 17:40:26.734090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.724 [2024-11-20 17:40:26.734114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.984 [2024-11-20 17:40:26.982094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.985 [2024-11-20 17:40:26.982272] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
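Each round's `nbd_dd_data_verify` pass seen above writes 1 MiB of `/dev/urandom` data through the nbd devices and then `cmp -b -n 1M`-verifies it. A sketch of that round trip using plain temp files in place of the `/dev/nbd*` block devices (an assumption; the harness writes to real nbd devices with `oflag=direct`, which temp files do not support):

```shell
# Write-then-verify round trip in the spirit of nbd_dd_data_verify:
# generate random data, copy it to the target, and byte-compare.
tmp_file=$(mktemp)
target=$(mktemp)

# 256 x 4 KiB blocks = 1 MiB of random payload, as in the log.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$target" bs=4096 count=256 2>/dev/null

# cmp -b prints differing bytes; -n 1M limits the compare to 1 MiB.
cmp -b -n 1M "$tmp_file" "$target" && echo "verify ok"

rm -f "$tmp_file" "$target"
```

Keeping the random pattern in a temp file (`nbdrandtest` in the log) lets the same source data be compared against every device in `nbd_list` before it is removed.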
00:06:01.373 17:40:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.373 17:40:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:01.373 spdk_app_start Round 2 00:06:01.373 17:40:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58679 /var/tmp/spdk-nbd.sock 00:06:01.373 17:40:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58679 ']' 00:06:01.373 17:40:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.373 17:40:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.373 17:40:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.373 17:40:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.373 17:40:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.632 17:40:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.632 17:40:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.632 17:40:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.891 Malloc0 00:06:01.891 17:40:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.213 Malloc1 00:06:02.213 17:40:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.213 
17:40:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.213 17:40:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.471 /dev/nbd0 00:06:02.471 17:40:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.471 17:40:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:02.471 17:40:29 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.471 1+0 records in 00:06:02.471 1+0 records out 00:06:02.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232892 s, 17.6 MB/s 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.471 17:40:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.471 17:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.471 17:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.471 17:40:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.729 /dev/nbd1 00:06:02.729 17:40:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.729 17:40:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.729 17:40:29 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.729 1+0 records in 00:06:02.729 1+0 records out 00:06:02.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398291 s, 10.3 MB/s 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.729 17:40:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.729 17:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.729 17:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.729 17:40:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.729 17:40:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.729 17:40:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.987 17:40:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.987 { 00:06:02.987 "nbd_device": "/dev/nbd0", 00:06:02.987 "bdev_name": "Malloc0" 00:06:02.987 }, 00:06:02.987 { 00:06:02.987 "nbd_device": "/dev/nbd1", 00:06:02.987 "bdev_name": 
"Malloc1" 00:06:02.987 } 00:06:02.987 ]' 00:06:02.987 17:40:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.987 17:40:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.987 { 00:06:02.987 "nbd_device": "/dev/nbd0", 00:06:02.987 "bdev_name": "Malloc0" 00:06:02.987 }, 00:06:02.987 { 00:06:02.987 "nbd_device": "/dev/nbd1", 00:06:02.987 "bdev_name": "Malloc1" 00:06:02.987 } 00:06:02.987 ]' 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.987 /dev/nbd1' 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.987 /dev/nbd1' 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.987 256+0 records in 00:06:02.987 256+0 records out 00:06:02.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124895 s, 84.0 MB/s 
00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.987 256+0 records in 00:06:02.987 256+0 records out 00:06:02.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275734 s, 38.0 MB/s 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.987 17:40:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.987 256+0 records in 00:06:02.987 256+0 records out 00:06:02.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301699 s, 34.8 MB/s 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.988 17:40:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.246 17:40:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.506 17:40:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.766 17:40:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.766 17:40:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.335 17:40:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.716 [2024-11-20 17:40:32.472818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.716 [2024-11-20 17:40:32.590460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.716 [2024-11-20 17:40:32.590460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.716 [2024-11-20 17:40:32.792352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.716 [2024-11-20 17:40:32.792463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.621 17:40:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58679 /var/tmp/spdk-nbd.sock 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58679 ']' 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.621 17:40:34 event.app_repeat -- event/event.sh@39 -- # killprocess 58679 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58679 ']' 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58679 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58679 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58679' 00:06:07.621 killing process with pid 58679 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58679 00:06:07.621 17:40:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58679 00:06:08.558 spdk_app_start is called in Round 0. 00:06:08.558 Shutdown signal received, stop current app iteration 00:06:08.558 Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 reinitialization... 00:06:08.558 spdk_app_start is called in Round 1. 00:06:08.558 Shutdown signal received, stop current app iteration 00:06:08.558 Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 reinitialization... 00:06:08.558 spdk_app_start is called in Round 2. 
00:06:08.558 Shutdown signal received, stop current app iteration 00:06:08.558 Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 reinitialization... 00:06:08.558 spdk_app_start is called in Round 3. 00:06:08.558 Shutdown signal received, stop current app iteration 00:06:08.817 17:40:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:08.817 17:40:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:08.817 00:06:08.817 real 0m20.053s 00:06:08.817 user 0m43.216s 00:06:08.817 sys 0m2.706s 00:06:08.817 17:40:35 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.817 ************************************ 00:06:08.817 END TEST app_repeat 00:06:08.817 ************************************ 00:06:08.817 17:40:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.817 17:40:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:08.817 17:40:35 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.817 17:40:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.817 17:40:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.817 17:40:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.817 ************************************ 00:06:08.818 START TEST cpu_locks 00:06:08.818 ************************************ 00:06:08.818 17:40:35 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.818 * Looking for test storage... 
00:06:08.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:08.818 17:40:35 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.818 17:40:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.818 17:40:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.077 17:40:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.077 17:40:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:09.077 17:40:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.077 17:40:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.077 --rc genhtml_branch_coverage=1 00:06:09.077 --rc genhtml_function_coverage=1 00:06:09.077 --rc genhtml_legend=1 00:06:09.077 --rc geninfo_all_blocks=1 00:06:09.077 --rc geninfo_unexecuted_blocks=1 00:06:09.077 00:06:09.077 ' 00:06:09.077 17:40:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.077 --rc genhtml_branch_coverage=1 00:06:09.077 --rc genhtml_function_coverage=1 00:06:09.077 --rc genhtml_legend=1 00:06:09.077 --rc geninfo_all_blocks=1 00:06:09.077 --rc geninfo_unexecuted_blocks=1 
00:06:09.077 00:06:09.077 ' 00:06:09.077 17:40:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.077 --rc genhtml_branch_coverage=1 00:06:09.077 --rc genhtml_function_coverage=1 00:06:09.077 --rc genhtml_legend=1 00:06:09.077 --rc geninfo_all_blocks=1 00:06:09.077 --rc geninfo_unexecuted_blocks=1 00:06:09.077 00:06:09.077 ' 00:06:09.077 17:40:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.077 --rc genhtml_branch_coverage=1 00:06:09.077 --rc genhtml_function_coverage=1 00:06:09.078 --rc genhtml_legend=1 00:06:09.078 --rc geninfo_all_blocks=1 00:06:09.078 --rc geninfo_unexecuted_blocks=1 00:06:09.078 00:06:09.078 ' 00:06:09.078 17:40:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.078 17:40:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.078 17:40:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.078 17:40:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.078 17:40:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.078 17:40:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.078 17:40:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.078 ************************************ 00:06:09.078 START TEST default_locks 00:06:09.078 ************************************ 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:09.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.078 17:40:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59138 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59138 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59138 ']' 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.078 17:40:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.078 [2024-11-20 17:40:36.154316] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:06:09.078 [2024-11-20 17:40:36.154455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ] 00:06:09.336 [2024-11-20 17:40:36.334454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.336 [2024-11-20 17:40:36.475092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59138 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59138 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59138 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59138 ']' 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59138 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.757 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59138 00:06:11.016 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.016 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.016 killing process with pid 59138 00:06:11.016 17:40:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59138' 00:06:11.016 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59138 00:06:11.016 17:40:37 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59138 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59138 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59138 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59138 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59138 ']' 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.546 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59138) - No such process 00:06:13.546 ERROR: process (pid: 59138) is no longer running 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.546 00:06:13.546 real 0m4.637s 00:06:13.546 user 0m4.410s 00:06:13.546 sys 0m0.848s 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.546 17:40:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.546 ************************************ 00:06:13.546 END TEST default_locks 00:06:13.546 ************************************ 00:06:13.804 17:40:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.804 17:40:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:13.804 17:40:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.804 17:40:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.804 ************************************ 00:06:13.804 START TEST default_locks_via_rpc 00:06:13.804 ************************************ 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59214 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59214 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59214 ']' 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.804 17:40:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.804 [2024-11-20 17:40:40.846785] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:06:13.804 [2024-11-20 17:40:40.846910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:06:14.062 [2024-11-20 17:40:41.021084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.062 [2024-11-20 17:40:41.169257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.439 17:40:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59214 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59214 00:06:15.439 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59214 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59214 ']' 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59214 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59214 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.697 killing process with pid 59214 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59214' 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59214 00:06:15.697 17:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59214 00:06:18.990 00:06:18.990 real 0m4.834s 00:06:18.990 user 0m4.605s 00:06:18.990 sys 0m0.878s 00:06:18.990 17:40:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.990 17:40:45 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 ************************************ 00:06:18.990 END TEST default_locks_via_rpc 00:06:18.990 ************************************ 00:06:18.990 17:40:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:18.990 17:40:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.990 17:40:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.990 17:40:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 ************************************ 00:06:18.990 START TEST non_locking_app_on_locked_coremask 00:06:18.990 ************************************ 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59300 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59300 /var/tmp/spdk.sock 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59300 ']' 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.990 17:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.990 [2024-11-20 17:40:45.760890] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:18.990 [2024-11-20 17:40:45.761054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59300 ] 00:06:18.990 [2024-11-20 17:40:45.933212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.990 [2024-11-20 17:40:46.078907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59316 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59316 /var/tmp/spdk2.sock 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59316 ']' 00:06:20.376 17:40:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.376 17:40:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.376 [2024-11-20 17:40:47.234069] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:20.376 [2024-11-20 17:40:47.234191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:06:20.376 [2024-11-20 17:40:47.415879] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.376 [2024-11-20 17:40:47.415932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.640 [2024-11-20 17:40:47.712067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.181 17:40:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.181 17:40:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.181 17:40:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59300 00:06:23.181 17:40:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59300 00:06:23.181 17:40:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59300 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59300 ']' 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59300 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59300 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.440 killing process with pid 59300 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59300' 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59300 00:06:23.440 17:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59300 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59316 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59316 ']' 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59316 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59316 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59316' 00:06:30.011 killing process with pid 59316 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59316 00:06:30.011 17:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59316 00:06:31.912 00:06:31.912 real 0m13.242s 00:06:31.912 user 0m13.222s 00:06:31.912 sys 0m1.620s 00:06:31.912 17:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:31.912 17:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.912 ************************************ 00:06:31.912 END TEST non_locking_app_on_locked_coremask 00:06:31.912 ************************************ 00:06:31.912 17:40:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:31.913 17:40:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.913 17:40:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.913 17:40:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.913 ************************************ 00:06:31.913 START TEST locking_app_on_unlocked_coremask 00:06:31.913 ************************************ 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59481 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59481 /var/tmp/spdk.sock 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59481 ']' 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.913 17:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.913 [2024-11-20 17:40:59.052149] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:31.913 [2024-11-20 17:40:59.052299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59481 ] 00:06:32.175 [2024-11-20 17:40:59.228578] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.175 [2024-11-20 17:40:59.228628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.434 [2024-11-20 17:40:59.375722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59502 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59502 /var/tmp/spdk2.sock 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59502 ']' 
00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.370 17:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.628 [2024-11-20 17:41:00.589189] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:33.628 [2024-11-20 17:41:00.589333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59502 ] 00:06:33.628 [2024-11-20 17:41:00.773456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.195 [2024-11-20 17:41:01.070824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.099 17:41:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.099 17:41:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.099 17:41:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59502 00:06:36.099 17:41:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.099 17:41:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59502 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59481 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59481 ']' 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59481 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59481 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.056 killing process with pid 59481 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59481' 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59481 00:06:37.056 17:41:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59481 00:06:43.641 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59502 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59502 ']' 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59502 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59502 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59502' 00:06:43.642 killing process with pid 59502 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59502 00:06:43.642 17:41:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59502 00:06:45.547 00:06:45.547 real 0m13.733s 00:06:45.547 user 0m13.797s 00:06:45.547 sys 0m1.737s 00:06:45.547 17:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.547 17:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 ************************************ 00:06:45.547 END TEST locking_app_on_unlocked_coremask 00:06:45.547 ************************************ 00:06:45.807 17:41:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:45.807 17:41:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.807 17:41:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.807 17:41:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.807 ************************************ 00:06:45.807 START TEST locking_app_on_locked_coremask 00:06:45.807 ************************************
00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59667 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59667 /var/tmp/spdk.sock 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59667 ']' 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.807 17:41:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.807 [2024-11-20 17:41:12.889034] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:06:45.807 [2024-11-20 17:41:12.889179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59667 ] 00:06:46.066 [2024-11-20 17:41:13.074724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.066 [2024-11-20 17:41:13.199493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59690 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59690 /var/tmp/spdk2.sock 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59690 /var/tmp/spdk2.sock 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.006
17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59690 /var/tmp/spdk2.sock 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59690 ']' 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.006 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.289 [2024-11-20 17:41:14.250590] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:47.289 [2024-11-20 17:41:14.250719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59690 ] 00:06:47.289 [2024-11-20 17:41:14.426497] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59667 has claimed it. 00:06:47.289 [2024-11-20 17:41:14.426582] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:47.857 ERROR: process (pid: 59690) is no longer running 00:06:47.857 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59690) - No such process 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59667 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59667 00:06:47.857 17:41:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59667 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59667 ']' 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59667 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59667 00:06:48.116 
17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.116 killing process with pid 59667 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59667' 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59667 00:06:48.116 17:41:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59667 00:06:50.651 00:06:50.651 real 0m5.050s 00:06:50.651 user 0m5.214s 00:06:50.651 sys 0m0.801s 00:06:50.652 17:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.652 17:41:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.652 ************************************ 00:06:50.652 END TEST locking_app_on_locked_coremask 00:06:50.652 ************************************ 00:06:50.912 17:41:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:50.912 17:41:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.912 17:41:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.912 17:41:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.912 ************************************ 00:06:50.912 START TEST locking_overlapped_coremask 00:06:50.912 ************************************ 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59758 00:06:50.912 17:41:17 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59758 /var/tmp/spdk.sock 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59758 ']' 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.912 17:41:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.912 [2024-11-20 17:41:17.981603] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:06:50.912 [2024-11-20 17:41:17.981735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59758 ] 00:06:51.172 [2024-11-20 17:41:18.162088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.172 [2024-11-20 17:41:18.302125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.172 [2024-11-20 17:41:18.302266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.172 [2024-11-20 17:41:18.302302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59776 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59776 /var/tmp/spdk2.sock 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59776 /var/tmp/spdk2.sock 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.109 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59776 /var/tmp/spdk2.sock 00:06:52.110 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59776 ']' 00:06:52.110 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.110 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.110 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.110 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.110 17:41:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.369 [2024-11-20 17:41:19.365620] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:52.369 [2024-11-20 17:41:19.365759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59776 ] 00:06:52.369 [2024-11-20 17:41:19.541929] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59758 has claimed it. 00:06:52.369 [2024-11-20 17:41:19.542015] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:52.938 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59776) - No such process 00:06:52.938 ERROR: process (pid: 59776) is no longer running 00:06:52.938 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.938 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:52.938 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:52.938 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.938 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.938 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59758 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59758 ']' 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59758 00:06:52.939 17:41:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59758 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.939 killing process with pid 59758 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59758' 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59758 00:06:52.939 17:41:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59758 00:06:55.526 00:06:55.526 real 0m4.783s 00:06:55.526 user 0m13.074s 00:06:55.526 sys 0m0.620s 00:06:55.526 17:41:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.526 17:41:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.526 ************************************ 00:06:55.526 END TEST locking_overlapped_coremask 00:06:55.526 ************************************ 00:06:55.785 17:41:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.785 17:41:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.785 17:41:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.785 17:41:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.785 ************************************ 00:06:55.785 START TEST 
locking_overlapped_coremask_via_rpc 00:06:55.785 ************************************ 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59851 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59851 /var/tmp/spdk.sock 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59851 ']' 00:06:55.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.785 17:41:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.785 [2024-11-20 17:41:22.825554] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:06:55.785 [2024-11-20 17:41:22.825768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59851 ] 00:06:56.044 [2024-11-20 17:41:23.000036] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:56.044 [2024-11-20 17:41:23.000127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.044 [2024-11-20 17:41:23.133420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.044 [2024-11-20 17:41:23.133596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.044 [2024-11-20 17:41:23.133628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59869 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59869 /var/tmp/spdk2.sock 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59869 ']' 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:56.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.979 17:41:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.238 [2024-11-20 17:41:24.239564] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:57.238 [2024-11-20 17:41:24.239825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59869 ] 00:06:57.498 [2024-11-20 17:41:24.425004] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.498 [2024-11-20 17:41:24.429083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.758 [2024-11-20 17:41:24.678305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.758 [2024-11-20 17:41:24.678326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.758 [2024-11-20 17:41:24.678334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.331 17:41:26 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.331 [2024-11-20 17:41:26.914254] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59851 has claimed it. 00:07:00.331 request: 00:07:00.331 { 00:07:00.331 "method": "framework_enable_cpumask_locks", 00:07:00.331 "req_id": 1 00:07:00.331 } 00:07:00.331 Got JSON-RPC error response 00:07:00.331 response: 00:07:00.331 { 00:07:00.331 "code": -32603, 00:07:00.331 "message": "Failed to claim CPU core: 2" 00:07:00.331 } 00:07:00.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59851 /var/tmp/spdk.sock 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59851 ']' 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.331 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59869 /var/tmp/spdk2.sock 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59869 ']' 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.331 ************************************ 00:07:00.331 END TEST locking_overlapped_coremask_via_rpc 00:07:00.331 ************************************ 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.331 00:07:00.331 real 0m4.688s 00:07:00.331 user 0m1.456s 00:07:00.331 sys 0m0.220s 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.331 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.331 17:41:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:00.331 17:41:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59851 ]] 00:07:00.331 17:41:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59851 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59851 ']' 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59851 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59851 00:07:00.331 killing process with pid 59851 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59851' 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59851 00:07:00.331 17:41:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59851 00:07:03.665 17:41:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59869 ]] 00:07:03.665 17:41:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59869 00:07:03.665 17:41:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59869 ']' 00:07:03.665 17:41:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59869 00:07:03.665 17:41:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:03.666 17:41:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.666 17:41:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59869 00:07:03.666 17:41:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:03.666 17:41:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:03.666 17:41:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59869' 00:07:03.666 killing 
process with pid 59869 00:07:03.666 17:41:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59869 00:07:03.666 17:41:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59869 00:07:06.203 17:41:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.203 17:41:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:06.203 17:41:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59851 ]] 00:07:06.203 17:41:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59851 00:07:06.203 17:41:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59851 ']' 00:07:06.203 Process with pid 59851 is not found 00:07:06.203 17:41:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59851 00:07:06.203 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59851) - No such process 00:07:06.203 17:41:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59851 is not found' 00:07:06.203 Process with pid 59869 is not found 00:07:06.203 17:41:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59869 ]] 00:07:06.203 17:41:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59869 00:07:06.203 17:41:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59869 ']' 00:07:06.203 17:41:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59869 00:07:06.203 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59869) - No such process 00:07:06.203 17:41:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59869 is not found' 00:07:06.203 17:41:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.203 ************************************ 00:07:06.203 END TEST cpu_locks 00:07:06.203 ************************************ 00:07:06.203 00:07:06.203 real 0m57.160s 00:07:06.203 user 1m35.374s 00:07:06.203 sys 0m8.027s 00:07:06.203 17:41:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:06.203 17:41:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.203 ************************************ 00:07:06.203 END TEST event 00:07:06.203 ************************************ 00:07:06.203 00:07:06.203 real 1m29.908s 00:07:06.203 user 2m40.767s 00:07:06.203 sys 0m12.112s 00:07:06.203 17:41:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.203 17:41:33 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.203 17:41:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:06.203 17:41:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.203 17:41:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.203 17:41:33 -- common/autotest_common.sh@10 -- # set +x 00:07:06.203 ************************************ 00:07:06.203 START TEST thread 00:07:06.203 ************************************ 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:06.203 * Looking for test storage... 
00:07:06.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.203 17:41:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.203 17:41:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.203 17:41:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.203 17:41:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.203 17:41:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.203 17:41:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.203 17:41:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.203 17:41:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.203 17:41:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.203 17:41:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.203 17:41:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.203 17:41:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:06.203 17:41:33 thread -- scripts/common.sh@345 -- # : 1 00:07:06.203 17:41:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.203 17:41:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.203 17:41:33 thread -- scripts/common.sh@365 -- # decimal 1 00:07:06.203 17:41:33 thread -- scripts/common.sh@353 -- # local d=1 00:07:06.203 17:41:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.203 17:41:33 thread -- scripts/common.sh@355 -- # echo 1 00:07:06.203 17:41:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.203 17:41:33 thread -- scripts/common.sh@366 -- # decimal 2 00:07:06.203 17:41:33 thread -- scripts/common.sh@353 -- # local d=2 00:07:06.203 17:41:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.203 17:41:33 thread -- scripts/common.sh@355 -- # echo 2 00:07:06.203 17:41:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.203 17:41:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.203 17:41:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.203 17:41:33 thread -- scripts/common.sh@368 -- # return 0 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.203 --rc genhtml_branch_coverage=1 00:07:06.203 --rc genhtml_function_coverage=1 00:07:06.203 --rc genhtml_legend=1 00:07:06.203 --rc geninfo_all_blocks=1 00:07:06.203 --rc geninfo_unexecuted_blocks=1 00:07:06.203 00:07:06.203 ' 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.203 --rc genhtml_branch_coverage=1 00:07:06.203 --rc genhtml_function_coverage=1 00:07:06.203 --rc genhtml_legend=1 00:07:06.203 --rc geninfo_all_blocks=1 00:07:06.203 --rc geninfo_unexecuted_blocks=1 00:07:06.203 00:07:06.203 ' 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.203 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.203 --rc genhtml_branch_coverage=1 00:07:06.203 --rc genhtml_function_coverage=1 00:07:06.203 --rc genhtml_legend=1 00:07:06.203 --rc geninfo_all_blocks=1 00:07:06.203 --rc geninfo_unexecuted_blocks=1 00:07:06.203 00:07:06.203 ' 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.203 --rc genhtml_branch_coverage=1 00:07:06.203 --rc genhtml_function_coverage=1 00:07:06.203 --rc genhtml_legend=1 00:07:06.203 --rc geninfo_all_blocks=1 00:07:06.203 --rc geninfo_unexecuted_blocks=1 00:07:06.203 00:07:06.203 ' 00:07:06.203 17:41:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.203 17:41:33 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.203 ************************************ 00:07:06.203 START TEST thread_poller_perf 00:07:06.203 ************************************ 00:07:06.204 17:41:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.204 [2024-11-20 17:41:33.352272] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:07:06.204 [2024-11-20 17:41:33.352458] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60075 ] 00:07:06.464 [2024-11-20 17:41:33.512718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.725 [2024-11-20 17:41:33.645342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.725 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:08.115 [2024-11-20T17:41:35.291Z] ====================================== 00:07:08.115 [2024-11-20T17:41:35.291Z] busy:2300219092 (cyc) 00:07:08.115 [2024-11-20T17:41:35.291Z] total_run_count: 357000 00:07:08.115 [2024-11-20T17:41:35.291Z] tsc_hz: 2290000000 (cyc) 00:07:08.115 [2024-11-20T17:41:35.291Z] ====================================== 00:07:08.115 [2024-11-20T17:41:35.291Z] poller_cost: 6443 (cyc), 2813 (nsec) 00:07:08.115 00:07:08.115 real 0m1.592s 00:07:08.115 user 0m1.386s 00:07:08.115 sys 0m0.096s 00:07:08.115 17:41:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.115 17:41:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.115 ************************************ 00:07:08.115 END TEST thread_poller_perf 00:07:08.115 ************************************ 00:07:08.115 17:41:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.115 17:41:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:08.115 17:41:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.115 17:41:34 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.116 ************************************ 00:07:08.116 START TEST thread_poller_perf 00:07:08.116 
************************************ 00:07:08.116 17:41:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.116 [2024-11-20 17:41:35.015689] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:08.116 [2024-11-20 17:41:35.015961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60112 ] 00:07:08.116 [2024-11-20 17:41:35.200856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.375 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:08.375 [2024-11-20 17:41:35.321882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.747 [2024-11-20T17:41:36.923Z] ====================================== 00:07:09.747 [2024-11-20T17:41:36.923Z] busy:2293617230 (cyc) 00:07:09.747 [2024-11-20T17:41:36.923Z] total_run_count: 4846000 00:07:09.747 [2024-11-20T17:41:36.923Z] tsc_hz: 2290000000 (cyc) 00:07:09.747 [2024-11-20T17:41:36.923Z] ====================================== 00:07:09.747 [2024-11-20T17:41:36.923Z] poller_cost: 473 (cyc), 206 (nsec) 00:07:09.747 00:07:09.747 real 0m1.604s 00:07:09.747 user 0m1.391s 00:07:09.747 sys 0m0.105s 00:07:09.747 17:41:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.747 17:41:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.747 ************************************ 00:07:09.747 END TEST thread_poller_perf 00:07:09.747 ************************************ 00:07:09.747 17:41:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:09.747 00:07:09.747 real 0m3.542s 00:07:09.747 user 0m2.931s 00:07:09.747 sys 0m0.405s 00:07:09.747 ************************************ 
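The poller_cost figures in the two summaries above can be reproduced from the raw counters. A minimal sketch, assuming the cost is simply busy cycles divided by run count and the nanosecond figure follows from the TSC frequency (the formula is inferred from the reported numbers, not taken from poller_perf's source; values below are from the second, zero-period run):

```shell
# Reproduce "poller_cost: 473 (cyc), 206 (nsec)" from the summary counters.
busy_cyc=2293617230      # busy: total busy TSC cycles over the run
total_run_count=4846000  # number of poller executions
tsc_hz=2290000000        # TSC frequency in Hz

# Cost per poller call in cycles, then in nanoseconds (integer division,
# matching the truncated values the report prints).
cost_cyc=$(( busy_cyc / total_run_count ))
cost_nsec=$(( busy_cyc * 1000000000 / tsc_hz / total_run_count ))

echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

The first run (1 µs period) gives 2300219092 / 357000 ≈ 6443 cycles the same way; the much higher per-call cost there reflects the timer-based pollers versus the busy-loop pollers of the zero-period run.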
00:07:09.747 END TEST thread 00:07:09.747 ************************************ 00:07:09.747 17:41:36 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.747 17:41:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.747 17:41:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:09.747 17:41:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:09.747 17:41:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.747 17:41:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.747 17:41:36 -- common/autotest_common.sh@10 -- # set +x 00:07:09.747 ************************************ 00:07:09.747 START TEST app_cmdline 00:07:09.747 ************************************ 00:07:09.747 17:41:36 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:09.747 * Looking for test storage... 00:07:09.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:09.747 17:41:36 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.747 17:41:36 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.747 17:41:36 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.747 17:41:36 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:09.747 17:41:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.748 17:41:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:09.748 17:41:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.748 17:41:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.748 17:41:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.748 17:41:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.748 --rc genhtml_branch_coverage=1 00:07:09.748 --rc genhtml_function_coverage=1 00:07:09.748 --rc 
genhtml_legend=1 00:07:09.748 --rc geninfo_all_blocks=1 00:07:09.748 --rc geninfo_unexecuted_blocks=1 00:07:09.748 00:07:09.748 ' 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.748 --rc genhtml_branch_coverage=1 00:07:09.748 --rc genhtml_function_coverage=1 00:07:09.748 --rc genhtml_legend=1 00:07:09.748 --rc geninfo_all_blocks=1 00:07:09.748 --rc geninfo_unexecuted_blocks=1 00:07:09.748 00:07:09.748 ' 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.748 --rc genhtml_branch_coverage=1 00:07:09.748 --rc genhtml_function_coverage=1 00:07:09.748 --rc genhtml_legend=1 00:07:09.748 --rc geninfo_all_blocks=1 00:07:09.748 --rc geninfo_unexecuted_blocks=1 00:07:09.748 00:07:09.748 ' 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.748 --rc genhtml_branch_coverage=1 00:07:09.748 --rc genhtml_function_coverage=1 00:07:09.748 --rc genhtml_legend=1 00:07:09.748 --rc geninfo_all_blocks=1 00:07:09.748 --rc geninfo_unexecuted_blocks=1 00:07:09.748 00:07:09.748 ' 00:07:09.748 17:41:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:09.748 17:41:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60201 00:07:09.748 17:41:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:09.748 17:41:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60201 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60201 ']' 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.748 17:41:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.005 [2024-11-20 17:41:37.037649] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:10.005 [2024-11-20 17:41:37.037917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 00:07:10.263 [2024-11-20 17:41:37.225778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.263 [2024-11-20 17:41:37.343992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.198 17:41:38 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.198 17:41:38 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:11.198 17:41:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:11.457 { 00:07:11.457 "version": "SPDK v25.01-pre git sha1 09ac735c8", 00:07:11.457 "fields": { 00:07:11.457 "major": 25, 00:07:11.457 "minor": 1, 00:07:11.457 "patch": 0, 00:07:11.457 "suffix": "-pre", 00:07:11.457 "commit": "09ac735c8" 00:07:11.457 } 00:07:11.457 } 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:11.457 17:41:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:11.457 17:41:38 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.717 request: 00:07:11.717 { 00:07:11.717 "method": "env_dpdk_get_mem_stats", 00:07:11.717 "req_id": 1 00:07:11.717 } 00:07:11.717 Got JSON-RPC error response 00:07:11.717 response: 00:07:11.717 { 00:07:11.717 "code": -32601, 00:07:11.717 "message": "Method not found" 00:07:11.717 } 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.717 17:41:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60201 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60201 ']' 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60201 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60201 00:07:11.717 killing process with pid 60201 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60201' 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@973 -- # kill 60201 00:07:11.717 17:41:38 app_cmdline -- common/autotest_common.sh@978 -- # wait 60201 00:07:14.251 ************************************ 00:07:14.251 END TEST app_cmdline 00:07:14.251 ************************************ 
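The app_cmdline test above starts spdk_tgt with `--rpcs-allowed spdk_get_version,rpc_get_methods` and verifies two things: the exposed method list matches that allow-list exactly, and any other method (here `env_dpdk_get_mem_stats`) is rejected with JSON-RPC error -32601 ("Method not found"). A sketch of the list comparison, with the live `rpc_get_methods` query stubbed out rather than issued against a running target:

```shell
# Sketch of the allowed-list check. In the real test, `methods` comes from:
#   rpc_cmd rpc_get_methods | jq -r '.[]' | sort
# Here it is stubbed with the expected result so the sketch is self-contained.
expected_methods=("rpc_get_methods" "spdk_get_version")
methods=("rpc_get_methods" "spdk_get_version")

# Same shape as the test's checks: count, then element-wise equality.
(( ${#methods[@]} == ${#expected_methods[@]} )) || exit 1
[[ "${methods[*]}" == "${expected_methods[*]}" ]] || exit 1
echo "method list matches allow-list"
```

Any RPC outside the allow-list never reaches its handler, which is why the error code is the generic JSON-RPC "method not found" (-32601) rather than an SPDK-specific failure.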
00:07:14.251 00:07:14.251 real 0m4.700s 00:07:14.251 user 0m4.981s 00:07:14.251 sys 0m0.661s 00:07:14.251 17:41:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.251 17:41:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.511 17:41:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:14.511 17:41:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.511 17:41:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.511 17:41:41 -- common/autotest_common.sh@10 -- # set +x 00:07:14.511 ************************************ 00:07:14.511 START TEST version 00:07:14.511 ************************************ 00:07:14.511 17:41:41 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:14.511 * Looking for test storage... 00:07:14.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:14.511 17:41:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.511 17:41:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.511 17:41:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.511 17:41:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.511 17:41:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.511 17:41:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.511 17:41:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.511 17:41:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.511 17:41:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.511 17:41:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.511 17:41:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.511 17:41:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.511 17:41:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.511 17:41:41 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:14.511 17:41:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.511 17:41:41 version -- scripts/common.sh@344 -- # case "$op" in 00:07:14.511 17:41:41 version -- scripts/common.sh@345 -- # : 1 00:07:14.511 17:41:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.511 17:41:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.511 17:41:41 version -- scripts/common.sh@365 -- # decimal 1 00:07:14.511 17:41:41 version -- scripts/common.sh@353 -- # local d=1 00:07:14.511 17:41:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.511 17:41:41 version -- scripts/common.sh@355 -- # echo 1 00:07:14.511 17:41:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.511 17:41:41 version -- scripts/common.sh@366 -- # decimal 2 00:07:14.511 17:41:41 version -- scripts/common.sh@353 -- # local d=2 00:07:14.511 17:41:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.511 17:41:41 version -- scripts/common.sh@355 -- # echo 2 00:07:14.511 17:41:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.511 17:41:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.511 17:41:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.511 17:41:41 version -- scripts/common.sh@368 -- # return 0 00:07:14.511 17:41:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.511 17:41:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.511 --rc genhtml_branch_coverage=1 00:07:14.511 --rc genhtml_function_coverage=1 00:07:14.511 --rc genhtml_legend=1 00:07:14.511 --rc geninfo_all_blocks=1 00:07:14.511 --rc geninfo_unexecuted_blocks=1 00:07:14.511 00:07:14.511 ' 00:07:14.511 17:41:41 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:14.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.511 --rc genhtml_branch_coverage=1 00:07:14.512 --rc genhtml_function_coverage=1 00:07:14.512 --rc genhtml_legend=1 00:07:14.512 --rc geninfo_all_blocks=1 00:07:14.512 --rc geninfo_unexecuted_blocks=1 00:07:14.512 00:07:14.512 ' 00:07:14.512 17:41:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.512 --rc genhtml_branch_coverage=1 00:07:14.512 --rc genhtml_function_coverage=1 00:07:14.512 --rc genhtml_legend=1 00:07:14.512 --rc geninfo_all_blocks=1 00:07:14.512 --rc geninfo_unexecuted_blocks=1 00:07:14.512 00:07:14.512 ' 00:07:14.512 17:41:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.512 --rc genhtml_branch_coverage=1 00:07:14.512 --rc genhtml_function_coverage=1 00:07:14.512 --rc genhtml_legend=1 00:07:14.512 --rc geninfo_all_blocks=1 00:07:14.512 --rc geninfo_unexecuted_blocks=1 00:07:14.512 00:07:14.512 ' 00:07:14.512 17:41:41 version -- app/version.sh@17 -- # get_header_version major 00:07:14.512 17:41:41 version -- app/version.sh@14 -- # cut -f2 00:07:14.512 17:41:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.512 17:41:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.772 17:41:41 version -- app/version.sh@17 -- # major=25 00:07:14.772 17:41:41 version -- app/version.sh@18 -- # get_header_version minor 00:07:14.772 17:41:41 version -- app/version.sh@14 -- # cut -f2 00:07:14.772 17:41:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.772 17:41:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.772 17:41:41 version -- app/version.sh@18 -- # minor=1 00:07:14.772 17:41:41 
version -- app/version.sh@19 -- # get_header_version patch 00:07:14.772 17:41:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.772 17:41:41 version -- app/version.sh@14 -- # cut -f2 00:07:14.772 17:41:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.772 17:41:41 version -- app/version.sh@19 -- # patch=0 00:07:14.772 17:41:41 version -- app/version.sh@20 -- # get_header_version suffix 00:07:14.772 17:41:41 version -- app/version.sh@14 -- # cut -f2 00:07:14.772 17:41:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.772 17:41:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.772 17:41:41 version -- app/version.sh@20 -- # suffix=-pre 00:07:14.772 17:41:41 version -- app/version.sh@22 -- # version=25.1 00:07:14.772 17:41:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:14.772 17:41:41 version -- app/version.sh@28 -- # version=25.1rc0 00:07:14.772 17:41:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:14.772 17:41:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:14.772 17:41:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:14.772 17:41:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:14.772 ************************************ 00:07:14.772 END TEST version 00:07:14.772 ************************************ 00:07:14.772 00:07:14.772 real 0m0.318s 00:07:14.772 user 0m0.201s 00:07:14.772 sys 0m0.166s 00:07:14.772 17:41:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.772 17:41:41 version -- common/autotest_common.sh@10 -- # set +x 00:07:14.772 
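The `get_header_version` helper traced above extracts each field from `include/spdk/version.h` with a grep/cut/tr pipeline. A standalone sketch against a stub header, so it runs anywhere (the tab-separated `#define` layout is assumed to match the real header, since `cut -f2` splits on tabs):

```shell
# Stand-in for include/spdk/version.h; fields are tab-separated,
# as the cut -f2 step in app/version.sh expects.
printf '#define SPDK_VERSION_MAJOR\t25\n'      >  /tmp/version.h
printf '#define SPDK_VERSION_MINOR\t1\n'       >> /tmp/version.h
printf '#define SPDK_VERSION_PATCH\t0\n'       >> /tmp/version.h
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> /tmp/version.h

get_header_version() {
    # Same pipeline as the trace: match the #define, take field 2, strip quotes.
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" /tmp/version.h \
        | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
suffix=$(get_header_version SUFFIX)
version="${major}.${minor}"
echo "${version}${suffix}"
```

With patch equal to 0 the patch component is dropped, and version.sh maps the `-pre` suffix to `rc0`, which is how the log arrives at `25.1rc0` before comparing it against the Python module's `spdk.__version__`.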
17:41:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:14.772 17:41:41 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:14.772 17:41:41 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:14.772 17:41:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.772 17:41:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.772 17:41:41 -- common/autotest_common.sh@10 -- # set +x 00:07:14.772 ************************************ 00:07:14.772 START TEST bdev_raid 00:07:14.772 ************************************ 00:07:14.772 17:41:41 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:15.032 * Looking for test storage... 00:07:15.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:15.032 17:41:41 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.032 17:41:41 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.032 17:41:41 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.032 17:41:42 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:15.032 17:41:42 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.033 17:41:42 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:15.033 17:41:42 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.033 17:41:42 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.033 --rc genhtml_branch_coverage=1 00:07:15.033 --rc genhtml_function_coverage=1 00:07:15.033 --rc genhtml_legend=1 00:07:15.033 --rc geninfo_all_blocks=1 00:07:15.033 --rc geninfo_unexecuted_blocks=1 00:07:15.033 00:07:15.033 ' 00:07:15.033 17:41:42 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.033 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:15.033 --rc genhtml_branch_coverage=1 00:07:15.033 --rc genhtml_function_coverage=1 00:07:15.033 --rc genhtml_legend=1 00:07:15.033 --rc geninfo_all_blocks=1 00:07:15.033 --rc geninfo_unexecuted_blocks=1 00:07:15.033 00:07:15.033 ' 00:07:15.033 17:41:42 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.033 --rc genhtml_branch_coverage=1 00:07:15.033 --rc genhtml_function_coverage=1 00:07:15.033 --rc genhtml_legend=1 00:07:15.033 --rc geninfo_all_blocks=1 00:07:15.033 --rc geninfo_unexecuted_blocks=1 00:07:15.033 00:07:15.033 ' 00:07:15.033 17:41:42 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.033 --rc genhtml_branch_coverage=1 00:07:15.033 --rc genhtml_function_coverage=1 00:07:15.033 --rc genhtml_legend=1 00:07:15.033 --rc geninfo_all_blocks=1 00:07:15.033 --rc geninfo_unexecuted_blocks=1 00:07:15.033 00:07:15.033 ' 00:07:15.033 17:41:42 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:15.033 17:41:42 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:15.033 17:41:42 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:15.033 17:41:42 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:15.033 17:41:42 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:15.033 17:41:42 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:15.033 17:41:42 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:15.033 17:41:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.033 17:41:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.033 17:41:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.033 ************************************ 
00:07:15.033 START TEST raid1_resize_data_offset_test 00:07:15.033 ************************************ 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:15.033 Process raid pid: 60394 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60394 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60394' 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60394 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60394 ']' 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.033 17:41:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.292 [2024-11-20 17:41:42.227930] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:07:15.292 [2024-11-20 17:41:42.228333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.292 [2024-11-20 17:41:42.410878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.551 [2024-11-20 17:41:42.529543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.810 [2024-11-20 17:41:42.739881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.810 [2024-11-20 17:41:42.740037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.069 malloc0 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.069 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.328 malloc1 00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.328 17:41:43 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.328 null0 00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.328 [2024-11-20 17:41:43.287168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:16.328 [2024-11-20 17:41:43.289069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:16.328 [2024-11-20 17:41:43.289131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:16.328 [2024-11-20 17:41:43.289289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:16.328 [2024-11-20 17:41:43.289303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:16.328 [2024-11-20 17:41:43.289613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:16.328 [2024-11-20 17:41:43.289808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:16.328 [2024-11-20 17:41:43.289822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:16.328 [2024-11-20 17:41:43.290037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.328 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.329 [2024-11-20 17:41:43.347085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.329 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.899 malloc2 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.899 [2024-11-20 17:41:43.914075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:16.899 [2024-11-20 17:41:43.931792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.899 [2024-11-20 17:41:43.933877] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60394 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60394 ']' 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60394 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:16.899 17:41:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60394 00:07:16.899 17:41:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.899 17:41:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.899 17:41:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60394' 00:07:16.899 killing process with pid 60394 00:07:16.899 17:41:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60394 00:07:16.899 17:41:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60394 00:07:16.899 [2024-11-20 17:41:44.025685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.899 [2024-11-20 17:41:44.026778] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:16.899 [2024-11-20 17:41:44.026943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.899 [2024-11-20 17:41:44.027001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:16.899 [2024-11-20 17:41:44.067664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.899 [2024-11-20 17:41:44.068097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.899 [2024-11-20 17:41:44.068163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:19.428 [2024-11-20 17:41:45.998817] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.367 17:41:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:20.367 00:07:20.367 real 0m5.180s 00:07:20.367 user 0m5.076s 00:07:20.367 sys 0m0.619s 00:07:20.367 
************************************ 00:07:20.367 END TEST raid1_resize_data_offset_test 00:07:20.367 ************************************ 00:07:20.367 17:41:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.367 17:41:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.367 17:41:47 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:20.367 17:41:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.367 17:41:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.367 17:41:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.367 ************************************ 00:07:20.367 START TEST raid0_resize_superblock_test 00:07:20.368 ************************************ 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60483 00:07:20.368 Process raid pid: 60483 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60483' 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60483 00:07:20.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60483 ']' 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.368 17:41:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.368 [2024-11-20 17:41:47.453447] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:20.368 [2024-11-20 17:41:47.453603] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.627 [2024-11-20 17:41:47.639116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.627 [2024-11-20 17:41:47.781082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.886 [2024-11-20 17:41:48.031131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.886 [2024-11-20 17:41:48.031198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.144 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.144 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:21.144 17:41:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:21.144 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.144 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.082 malloc0 00:07:22.082 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.082 17:41:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:22.082 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.082 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.082 [2024-11-20 17:41:48.951428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:22.082 [2024-11-20 17:41:48.951511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.082 [2024-11-20 17:41:48.951551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:22.082 [2024-11-20 17:41:48.951567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.082 [2024-11-20 17:41:48.954226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.082 [2024-11-20 17:41:48.954274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:22.082 pt0 00:07:22.082 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.082 17:41:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:22.082 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.082 17:41:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.082 c3dcffd9-6e7a-42e3-a5ba-b89b5cff2955 00:07:22.082 17:41:49 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.082 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:22.082 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.082 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.083 66d0c3d0-d553-4a4c-b2ce-98ae770197a7 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.083 6aa69808-7e87-4488-a0f8-a5a0a719a4f5 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.083 [2024-11-20 17:41:49.163042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 66d0c3d0-d553-4a4c-b2ce-98ae770197a7 is claimed 00:07:22.083 [2024-11-20 17:41:49.163223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6aa69808-7e87-4488-a0f8-a5a0a719a4f5 is claimed 00:07:22.083 [2024-11-20 17:41:49.163423] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:22.083 [2024-11-20 17:41:49.163450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:22.083 [2024-11-20 17:41:49.163780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.083 [2024-11-20 17:41:49.164045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:22.083 [2024-11-20 17:41:49.164061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:22.083 [2024-11-20 17:41:49.164271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.083 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:22.083 17:41:49 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.342 [2024-11-20 17:41:49.271211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.342 [2024-11-20 17:41:49.319181] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:22.342 [2024-11-20 17:41:49.319279] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '66d0c3d0-d553-4a4c-b2ce-98ae770197a7' was resized: old size 131072, new size 204800 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.342 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.342 [2024-11-20 17:41:49.330925] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:22.342 [2024-11-20 17:41:49.331041] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6aa69808-7e87-4488-a0f8-a5a0a719a4f5' was resized: old size 131072, new size 204800 00:07:22.343 [2024-11-20 17:41:49.331090] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:22.343 17:41:49 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.343 [2024-11-20 17:41:49.442837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.343 [2024-11-20 17:41:49.490548] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:22.343 [2024-11-20 17:41:49.490706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:22.343 [2024-11-20 17:41:49.490750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.343 [2024-11-20 17:41:49.490806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:22.343 [2024-11-20 17:41:49.491049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.343 [2024-11-20 17:41:49.491147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.343 [2024-11-20 17:41:49.491217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.343 [2024-11-20 17:41:49.502355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:22.343 [2024-11-20 17:41:49.502470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.343 [2024-11-20 17:41:49.502526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:22.343 [2024-11-20 17:41:49.502576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.343 
[2024-11-20 17:41:49.505250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered [2024-11-20 17:41:49.505340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:22.343 pt0 00:07:22.343 [2024-11-20 17:41:49.507448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 66d0c3d0-d553-4a4c-b2ce-98ae770197a7 00:07:22.343 [2024-11-20 17:41:49.507607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 66d0c3d0-d553-4a4c-b2ce-98ae770197a7 is claimed 00:07:22.343 [2024-11-20 17:41:49.507808] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6aa69808-7e87-4488-a0f8-a5a0a719a4f5 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.343 [2024-11-20 17:41:49.507917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6aa69808-7e87-4488-a0f8-a5a0a719a4f5 is claimed 00:07:22.343 [2024-11-20 17:41:49.508173] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 6aa69808-7e87-4488-a0f8-a5a0a719a4f5 (2) smaller than existing raid bdev Raid (3) 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:22.343 [2024-11-20 17:41:49.508297] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 66d0c3d0-d553-4a4c-b2ce-98ae770197a7: File exists 00:07:22.343 [2024-11-20 17:41:49.508413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:22.343 [2024-11-20 17:41:49.508462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:22.343 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.343 [2024-11-20 17:41:49.508828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:22.343 17:41:49 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.343 [2024-11-20 17:41:49.509081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:22.343 [2024-11-20 17:41:49.509143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:22.343 [2024-11-20 17:41:49.509407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:22.603 [2024-11-20 17:41:49.530572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60483 00:07:22.603 17:41:49 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60483 ']' 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60483 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60483 00:07:22.603 killing process with pid 60483 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60483' 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60483 00:07:22.603 [2024-11-20 17:41:49.611371] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.603 17:41:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60483 00:07:22.603 [2024-11-20 17:41:49.611500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.603 [2024-11-20 17:41:49.611571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.603 [2024-11-20 17:41:49.611584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:24.536 [2024-11-20 17:41:51.222702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.475 17:41:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:25.475 00:07:25.475 real 0m5.153s 00:07:25.475 user 
0m5.179s 00:07:25.475 sys 0m0.770s 00:07:25.475 ************************************ 00:07:25.475 END TEST raid0_resize_superblock_test 00:07:25.475 ************************************ 00:07:25.475 17:41:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.475 17:41:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.475 17:41:52 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:25.475 17:41:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.475 17:41:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.475 17:41:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.475 ************************************ 00:07:25.475 START TEST raid1_resize_superblock_test 00:07:25.475 ************************************ 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60587 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.475 Process raid pid: 60587 00:07:25.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60587' 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60587 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60587 ']' 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.475 17:41:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.735 [2024-11-20 17:41:52.674989] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:07:25.735 [2024-11-20 17:41:52.675295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.735 [2024-11-20 17:41:52.860813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.994 [2024-11-20 17:41:53.012389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.252 [2024-11-20 17:41:53.267126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.252 [2024-11-20 17:41:53.267283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.511 17:41:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.511 17:41:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:26.511 17:41:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:26.511 17:41:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.511 17:41:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.448 malloc0 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.448 [2024-11-20 17:41:54.289003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:27.448 [2024-11-20 17:41:54.289143] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.448 [2024-11-20 17:41:54.289206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:27.448 [2024-11-20 17:41:54.289311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.448 [2024-11-20 17:41:54.292004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.448 [2024-11-20 17:41:54.292109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:27.448 pt0 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.448 634654c9-7b8e-4d2d-ad6e-7ab4227a8b5f 00:07:27.448 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.449 2e2456da-c8ec-48aa-b8e6-cb51076ec97a 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.449 17:41:54 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.449 b0e8838f-ba0b-46c4-8673-03bcb5d1ee25 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.449 [2024-11-20 17:41:54.490645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2e2456da-c8ec-48aa-b8e6-cb51076ec97a is claimed 00:07:27.449 [2024-11-20 17:41:54.490750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b0e8838f-ba0b-46c4-8673-03bcb5d1ee25 is claimed 00:07:27.449 [2024-11-20 17:41:54.490899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:27.449 [2024-11-20 17:41:54.490918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:27.449 [2024-11-20 17:41:54.491238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:27.449 [2024-11-20 17:41:54.491462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:27.449 [2024-11-20 17:41:54.491475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:27.449 [2024-11-20 17:41:54.491644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.449 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.449 [2024-11-20 
17:41:54.606805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.708 [2024-11-20 17:41:54.662816] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:27.708 [2024-11-20 17:41:54.662959] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2e2456da-c8ec-48aa-b8e6-cb51076ec97a' was resized: old size 131072, new size 204800 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.708 [2024-11-20 17:41:54.674752] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:27.708 [2024-11-20 17:41:54.674875] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b0e8838f-ba0b-46c4-8673-03bcb5d1ee25' was resized: old size 131072, new size 204800 00:07:27.708 
[2024-11-20 17:41:54.674932] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:27.708 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:27.709 17:41:54 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.709 [2024-11-20 17:41:54.778480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.709 [2024-11-20 17:41:54.814224] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:27.709 [2024-11-20 17:41:54.814320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:27.709 [2024-11-20 17:41:54.814370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:27.709 [2024-11-20 17:41:54.814571] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.709 [2024-11-20 17:41:54.814826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.709 [2024-11-20 17:41:54.814895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:07:27.709 [2024-11-20 17:41:54.814910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.709 [2024-11-20 17:41:54.826027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:27.709 [2024-11-20 17:41:54.826100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.709 [2024-11-20 17:41:54.826124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:27.709 [2024-11-20 17:41:54.826183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.709 [2024-11-20 17:41:54.828915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.709 [2024-11-20 17:41:54.828960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:27.709 pt0 00:07:27.709 [2024-11-20 17:41:54.830803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2e2456da-c8ec-48aa-b8e6-cb51076ec97a 00:07:27.709 [2024-11-20 17:41:54.830883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2e2456da-c8ec-48aa-b8e6-cb51076ec97a is claimed 00:07:27.709 [2024-11-20 17:41:54.830988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b0e8838f-ba0b-46c4-8673-03bcb5d1ee25 00:07:27.709 [2024-11-20 17:41:54.831007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b0e8838f-ba0b-46c4-8673-03bcb5d1ee25 is claimed 00:07:27.709 
[2024-11-20 17:41:54.831232] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b0e8838f-ba0b-46c4-8673-03bcb5d1ee25 (2) smaller than existing raid bdev Raid (3) 00:07:27.709 [2024-11-20 17:41:54.831259] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 2e2456da-c8ec-48aa-b8e6-cb51076ec97a: File exists 00:07:27.709 [2024-11-20 17:41:54.831295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:27.709 [2024-11-20 17:41:54.831308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:27.709 [2024-11-20 17:41:54.831582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:27.709 [2024-11-20 17:41:54.831755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:27.709 [2024-11-20 17:41:54.831764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:27.709 [2024-11-20 17:41:54.831929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:27.709 [2024-11-20 17:41:54.855150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.709 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60587 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60587 ']' 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60587 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60587 00:07:27.969 killing process with pid 60587 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60587' 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60587 00:07:27.969 [2024-11-20 17:41:54.938749] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.969 [2024-11-20 17:41:54.938877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.969 17:41:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60587 00:07:27.969 [2024-11-20 17:41:54.938951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.969 [2024-11-20 17:41:54.938961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:29.875 [2024-11-20 17:41:56.565611] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.826 ************************************ 00:07:30.826 END TEST raid1_resize_superblock_test 00:07:30.826 ************************************ 00:07:30.826 17:41:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:30.826 00:07:30.826 real 0m5.395s 00:07:30.826 user 0m5.470s 00:07:30.826 sys 0m0.803s 00:07:30.826 17:41:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.826 17:41:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.085 17:41:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:31.085 17:41:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:31.085 17:41:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:31.085 17:41:58 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:31.085 17:41:58 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:31.085 17:41:58 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:31.085 
17:41:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.085 17:41:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.085 17:41:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.085 ************************************ 00:07:31.085 START TEST raid_function_test_raid0 00:07:31.085 ************************************ 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60695 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60695' 00:07:31.085 Process raid pid: 60695 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60695 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60695 ']' 00:07:31.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.085 17:41:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:31.085 [2024-11-20 17:41:58.160996] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:31.085 [2024-11-20 17:41:58.161265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.346 [2024-11-20 17:41:58.334157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.346 [2024-11-20 17:41:58.481521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.604 [2024-11-20 17:41:58.735814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.604 [2024-11-20 17:41:58.735988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.863 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.863 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:31.863 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:31.863 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.863 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:32.123 Base_1 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.123 
17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:32.123 Base_2 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:32.123 [2024-11-20 17:41:59.134738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:32.123 [2024-11-20 17:41:59.137321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:32.123 [2024-11-20 17:41:59.137427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:32.123 [2024-11-20 17:41:59.137441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:32.123 [2024-11-20 17:41:59.137802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:32.123 [2024-11-20 17:41:59.138040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:32.123 [2024-11-20 17:41:59.138053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:32.123 [2024-11-20 17:41:59.138276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:32.123 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:32.383 [2024-11-20 17:41:59.414314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:32.383 /dev/nbd0 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.383 1+0 records in 00:07:32.383 1+0 records out 00:07:32.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063038 s, 6.5 MB/s 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:32.383 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:32.642 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.642 { 00:07:32.642 "nbd_device": "/dev/nbd0", 00:07:32.642 "bdev_name": "raid" 00:07:32.642 } 00:07:32.642 ]' 00:07:32.642 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.642 { 00:07:32.642 "nbd_device": "/dev/nbd0", 00:07:32.642 "bdev_name": "raid" 00:07:32.642 } 00:07:32.643 ]' 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:32.643 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:32.902 4096+0 records in 00:07:32.902 4096+0 records out 00:07:32.902 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0374059 s, 56.1 MB/s 00:07:32.902 17:41:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:33.161 4096+0 records in 00:07:33.161 4096+0 records out 00:07:33.161 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.267969 s, 7.8 MB/s 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:33.161 128+0 records in 00:07:33.161 128+0 records out 00:07:33.161 65536 bytes (66 kB, 64 KiB) copied, 0.00103801 s, 63.1 MB/s 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:33.161 2035+0 records in 00:07:33.161 2035+0 records out 00:07:33.161 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0104675 s, 99.5 MB/s 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:33.161 456+0 records in 00:07:33.161 456+0 records out 00:07:33.161 233472 bytes (233 kB, 228 KiB) copied, 0.00396624 s, 58.9 MB/s 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.161 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:33.420 [2024-11-20 17:42:00.472621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:33.420 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60695 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60695 ']' 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60695 00:07:33.679 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:33.680 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.680 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60695 00:07:33.680 killing process with pid 60695 00:07:33.680 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.680 17:42:00 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.680 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60695' 00:07:33.680 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60695 00:07:33.680 [2024-11-20 17:42:00.818495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.680 [2024-11-20 17:42:00.818609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.680 17:42:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60695 00:07:33.680 [2024-11-20 17:42:00.818662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.680 [2024-11-20 17:42:00.818678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:33.938 [2024-11-20 17:42:01.035473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.315 17:42:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:35.315 00:07:35.315 real 0m4.187s 00:07:35.315 user 0m4.757s 00:07:35.315 sys 0m1.116s 00:07:35.315 ************************************ 00:07:35.315 END TEST raid_function_test_raid0 00:07:35.315 ************************************ 00:07:35.315 17:42:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.315 17:42:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:35.315 17:42:02 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:35.315 17:42:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.315 17:42:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.315 17:42:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.315 
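The raid0 run above exercises the `raid_unmap_data_verify` loop from `bdev_raid.sh`: seed the raid device with random data, then for each (offset, count) pair zero the matching region of the reference file with `dd conv=notrunc`, discard the same region on the device with `blkdiscard`, flush, and `cmp` the full 2 MiB again. The same loop can be sketched file-for-file — this is a hypothetical standalone reduction, not the test script itself: `mktemp` paths stand in for `/raidtest/raidrandtest` and `/dev/nbd0`, and a plain zeroing `dd` stands in for `blkdiscard` (which is valid only under the assumption the log is verifying, namely that discarded blocks read back as zeroes):

```shell
#!/usr/bin/env bash
# File-based sketch of raid_unmap_data_verify (offsets/counts taken from the log).
set -euo pipefail

blksize=512
rw_blk_num=4096
rw_len=$((blksize * rw_blk_num))   # 2097152 bytes, matching the dd/cmp lines above

ref=$(mktemp)   # stands in for /raidtest/raidrandtest
dev=$(mktemp)   # stands in for /dev/nbd0 (a plain file in this sketch)
trap 'rm -f "$ref" "$dev"' EXIT

# Seed both with identical random data and verify the baseline, as the test does
# before entering the unmap loop.
dd if=/dev/urandom of="$ref" bs="$blksize" count="$rw_blk_num" status=none
cp "$ref" "$dev"
cmp -b -n "$rw_len" "$ref" "$dev"

unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

for i in "${!unmap_blk_offs[@]}"; do
  unmap_off=$((unmap_blk_offs[i] * blksize))
  unmap_len=$((unmap_blk_nums[i] * blksize))
  echo "unmap off=$unmap_off len=$unmap_len"
  # Zero the region in the reference file; conv=notrunc keeps the rest intact.
  dd if=/dev/zero of="$ref" bs="$blksize" seek="${unmap_blk_offs[i]}" \
     count="${unmap_blk_nums[i]}" conv=notrunc status=none
  # On the real device the test runs: blkdiscard -o $unmap_off -l $unmap_len /dev/nbd0
  # followed by: blockdev --flushbufs /dev/nbd0. Here we just zero the same span.
  dd if=/dev/zero of="$dev" bs="$blksize" seek="${unmap_blk_offs[i]}" \
     count="${unmap_blk_nums[i]}" conv=notrunc status=none
  # Full-length compare after every unmap, as in bdev_raid.sh line 48.
  cmp -b -n "$rw_len" "$ref" "$dev"
done
echo OK
```

The per-pass `cmp` over the whole 2 MiB (rather than just the discarded span) is the point of the design: it checks both that the discarded region reads back as zeroes and that neighbouring data was left untouched.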
************************************ 00:07:35.315 START TEST raid_function_test_concat 00:07:35.315 ************************************ 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60824 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60824' 00:07:35.315 Process raid pid: 60824 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60824 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60824 ']' 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.315 17:42:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:35.315 [2024-11-20 17:42:02.407560] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:35.315 [2024-11-20 17:42:02.407757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.575 [2024-11-20 17:42:02.587023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.575 [2024-11-20 17:42:02.715167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.835 [2024-11-20 17:42:02.936002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.835 [2024-11-20 17:42:02.936062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.404 Base_1 00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:36.404 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.404 Base_2 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.405 [2024-11-20 17:42:03.452855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:36.405 [2024-11-20 17:42:03.454912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:36.405 [2024-11-20 17:42:03.454996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:36.405 [2024-11-20 17:42:03.455009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:36.405 [2024-11-20 17:42:03.455297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:36.405 [2024-11-20 17:42:03.455476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:36.405 [2024-11-20 17:42:03.455486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:36.405 [2024-11-20 17:42:03.455672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:36.405 17:42:03 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:36.405 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:36.665 [2024-11-20 17:42:03.756417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:36.665 /dev/nbd0 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.665 1+0 records in 00:07:36.665 1+0 records out 00:07:36.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566536 s, 7.2 MB/s 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.665 
17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:36.665 17:42:03 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:36.924 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:36.924 { 00:07:36.924 "nbd_device": "/dev/nbd0", 00:07:36.924 "bdev_name": "raid" 00:07:36.924 } 00:07:36.924 ]' 00:07:36.924 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:36.924 { 00:07:36.924 "nbd_device": "/dev/nbd0", 00:07:36.924 "bdev_name": "raid" 00:07:36.924 } 00:07:36.924 ]' 00:07:36.924 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:37.183 
17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:37.183 4096+0 records in 00:07:37.183 4096+0 records out 00:07:37.183 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0359165 s, 58.4 MB/s 00:07:37.183 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:37.443 4096+0 records in 00:07:37.443 4096+0 
records out 00:07:37.443 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.233841 s, 9.0 MB/s 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:37.443 128+0 records in 00:07:37.443 128+0 records out 00:07:37.443 65536 bytes (66 kB, 64 KiB) copied, 0.00114786 s, 57.1 MB/s 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:37.443 2035+0 records in
00:07:37.443 2035+0 records out
00:07:37.443 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00882937 s, 118 MB/s
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:37.443 456+0 records in
00:07:37.443 456+0 records out
00:07:37.443 233472 bytes (233 kB, 228 KiB) copied, 0.00355327 s, 65.7 MB/s
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:37.443 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:37.703 [2024-11-20 17:42:04.797115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:37.703 17:42:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60824
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60824 ']'
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60824
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60824
00:07:37.961 killing process with pid 60824 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60824'
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60824
00:07:37.961 [2024-11-20 17:42:05.125000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:37.961 [2024-11-20 17:42:05.125125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:37.961 [2024-11-20 17:42:05.125184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:37.961 17:42:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60824
00:07:37.961 [2024-11-20 17:42:05.125198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:38.221 [2024-11-20 17:42:05.348475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:39.601 ************************************
00:07:39.601 END TEST raid_function_test_concat
00:07:39.601 ************************************
00:07:39.602 17:42:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:07:39.602
00:07:39.602 real 0m4.238s
00:07:39.602 user 0m5.038s
00:07:39.602 sys 0m1.004s
00:07:39.602 17:42:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:39.602 17:42:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:39.602 17:42:06 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:07:39.602 17:42:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:39.602 17:42:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.602 17:42:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:39.602 ************************************
00:07:39.602 START TEST raid0_resize_test
00:07:39.602 ************************************
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60958
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60958'
00:07:39.602 Process raid pid: 60958
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60958
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60958 ']'
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.602 17:42:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.602 [2024-11-20 17:42:06.713775] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:07:39.602 [2024-11-20 17:42:06.713906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:39.861 [2024-11-20 17:42:06.887636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:39.861 [2024-11-20 17:42:07.006451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.121 [2024-11-20 17:42:07.222189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:40.121 [2024-11-20 17:42:07.222256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.690 Base_1
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.690 Base_2
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.690 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.690 [2024-11-20 17:42:07.605979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:40.690 [2024-11-20 17:42:07.608089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:40.690 [2024-11-20 17:42:07.608160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:40.690 [2024-11-20 17:42:07.608180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:40.690 [2024-11-20 17:42:07.608513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:40.690 [2024-11-20 17:42:07.608672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:40.690 [2024-11-20 17:42:07.608689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:40.690 [2024-11-20 17:42:07.608897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.691 [2024-11-20 17:42:07.613935] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:40.691 [2024-11-20 17:42:07.613970] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:07:40.691 true
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:07:40.691 [2024-11-20 17:42:07.626261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.691 [2024-11-20 17:42:07.677871] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:40.691 [2024-11-20 17:42:07.677910] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:07:40.691 [2024-11-20 17:42:07.677947] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:07:40.691 true
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.691 [2024-11-20 17:42:07.690050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60958
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60958 ']'
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60958
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60958
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 60958
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60958'
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60958
00:07:40.691 [2024-11-20 17:42:07.743641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:40.691 [2024-11-20 17:42:07.743755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:40.691 17:42:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60958
00:07:40.691 [2024-11-20 17:42:07.743816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:40.691 [2024-11-20 17:42:07.743826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:40.691 [2024-11-20 17:42:07.762758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:42.071 17:42:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:42.071
00:07:42.071 real 0m2.349s
00:07:42.071 user 0m2.501s
00:07:42.071 sys 0m0.321s
00:07:42.071 17:42:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.071 17:42:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:42.071 ************************************
00:07:42.071 END TEST raid0_resize_test
00:07:42.071 ************************************
00:07:42.071 17:42:09 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:07:42.071 17:42:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:42.071 17:42:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.071 17:42:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:42.071 ************************************
00:07:42.071 START TEST raid1_resize_test
00:07:42.071 ************************************
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61015
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61015'
00:07:42.071 Process raid pid: 61015
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61015
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61015 ']'
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:42.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:42.071 17:42:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:42.071 [2024-11-20 17:42:09.126869] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:07:42.071 [2024-11-20 17:42:09.126983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:42.329 [2024-11-20 17:42:09.306795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.329 [2024-11-20 17:42:09.433318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.588 [2024-11-20 17:42:09.664712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:42.588 [2024-11-20 17:42:09.664756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.158 Base_1
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.158 Base_2
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.158 [2024-11-20 17:42:10.055811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:43.158 [2024-11-20 17:42:10.057763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:43.158 [2024-11-20 17:42:10.057831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:43.158 [2024-11-20 17:42:10.057843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:07:43.158 [2024-11-20 17:42:10.058146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:43.158 [2024-11-20 17:42:10.058299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:43.158 [2024-11-20 17:42:10.058316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:43.158 [2024-11-20 17:42:10.058484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.158 [2024-11-20 17:42:10.067791] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:43.158 [2024-11-20 17:42:10.067826] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:07:43.158 true
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.158 [2024-11-20 17:42:10.084005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.158 [2024-11-20 17:42:10.127700] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:43.158 [2024-11-20 17:42:10.127734] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:07:43.158 [2024-11-20 17:42:10.127762] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:07:43.158 true
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.158 [2024-11-20 17:42:10.143862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61015
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 61015 ']'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 61015
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61015
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 61015
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61015'
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 61015
00:07:43.158 [2024-11-20 17:42:10.213792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:43.158 [2024-11-20 17:42:10.213930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:43.158 17:42:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 61015
00:07:43.158 [2024-11-20 17:42:10.214493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:43.158 [2024-11-20 17:42:10.214524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:43.158 [2024-11-20 17:42:10.234387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:44.539 17:42:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:44.539
00:07:44.539 real 0m2.410s
00:07:44.539 user 0m2.576s
00:07:44.539 sys 0m0.368s
00:07:44.539 17:42:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.539 17:42:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.539 ************************************
00:07:44.539 END TEST raid1_resize_test
00:07:44.539 ************************************
00:07:44.539 17:42:11 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:07:44.539 17:42:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:44.539 17:42:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:07:44.539 17:42:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:44.539 17:42:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.539 17:42:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:44.539 ************************************
00:07:44.539 START TEST raid_state_function_test
00:07:44.539 ************************************
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:44.539 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61072
Process raid pid: 61072
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61072'
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61072
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61072 ']'
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:44.540 17:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.540 [2024-11-20 17:42:11.620264] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:07:44.540 [2024-11-20 17:42:11.620415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:44.799 [2024-11-20 17:42:11.778469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:44.799 [2024-11-20 17:42:11.910594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.057 [2024-11-20 17:42:12.136005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:45.057 [2024-11-20 17:42:12.136067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:45.316 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:45.316 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.574 [2024-11-20 17:42:12.496659] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:45.574 [2024-11-20 17:42:12.496723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:45.574 [2024-11-20 17:42:12.496735] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:45.574 [2024-11-20 17:42:12.496746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:45.574 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:45.575 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 --
xtrace_disable 00:07:45.575 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.575 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.575 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.575 "name": "Existed_Raid", 00:07:45.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.575 "strip_size_kb": 64, 00:07:45.575 "state": "configuring", 00:07:45.575 "raid_level": "raid0", 00:07:45.575 "superblock": false, 00:07:45.575 "num_base_bdevs": 2, 00:07:45.575 "num_base_bdevs_discovered": 0, 00:07:45.575 "num_base_bdevs_operational": 2, 00:07:45.575 "base_bdevs_list": [ 00:07:45.575 { 00:07:45.575 "name": "BaseBdev1", 00:07:45.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.575 "is_configured": false, 00:07:45.575 "data_offset": 0, 00:07:45.575 "data_size": 0 00:07:45.575 }, 00:07:45.575 { 00:07:45.575 "name": "BaseBdev2", 00:07:45.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.575 "is_configured": false, 00:07:45.575 "data_offset": 0, 00:07:45.575 "data_size": 0 00:07:45.575 } 00:07:45.575 ] 00:07:45.575 }' 00:07:45.575 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.575 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.834 [2024-11-20 17:42:12.983820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.834 [2024-11-20 17:42:12.983870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.834 [2024-11-20 17:42:12.991815] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.834 [2024-11-20 17:42:12.991867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.834 [2024-11-20 17:42:12.991877] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.834 [2024-11-20 17:42:12.991891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.834 17:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.093 [2024-11-20 17:42:13.040604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.093 BaseBdev1 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:46.093 17:42:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.093 [ 00:07:46.093 { 00:07:46.093 "name": "BaseBdev1", 00:07:46.093 "aliases": [ 00:07:46.093 "e03d0ce8-798b-4f06-80a8-40f0daa8d187" 00:07:46.093 ], 00:07:46.093 "product_name": "Malloc disk", 00:07:46.093 "block_size": 512, 00:07:46.093 "num_blocks": 65536, 00:07:46.093 "uuid": "e03d0ce8-798b-4f06-80a8-40f0daa8d187", 00:07:46.093 "assigned_rate_limits": { 00:07:46.093 "rw_ios_per_sec": 0, 00:07:46.093 "rw_mbytes_per_sec": 0, 00:07:46.093 "r_mbytes_per_sec": 0, 00:07:46.093 "w_mbytes_per_sec": 0 00:07:46.093 }, 00:07:46.093 "claimed": true, 00:07:46.093 "claim_type": "exclusive_write", 00:07:46.093 "zoned": false, 00:07:46.093 "supported_io_types": { 00:07:46.093 "read": true, 00:07:46.093 "write": true, 00:07:46.093 "unmap": true, 00:07:46.093 "flush": true, 
00:07:46.093 "reset": true, 00:07:46.093 "nvme_admin": false, 00:07:46.093 "nvme_io": false, 00:07:46.093 "nvme_io_md": false, 00:07:46.093 "write_zeroes": true, 00:07:46.093 "zcopy": true, 00:07:46.093 "get_zone_info": false, 00:07:46.093 "zone_management": false, 00:07:46.093 "zone_append": false, 00:07:46.093 "compare": false, 00:07:46.093 "compare_and_write": false, 00:07:46.093 "abort": true, 00:07:46.093 "seek_hole": false, 00:07:46.093 "seek_data": false, 00:07:46.093 "copy": true, 00:07:46.093 "nvme_iov_md": false 00:07:46.093 }, 00:07:46.093 "memory_domains": [ 00:07:46.093 { 00:07:46.093 "dma_device_id": "system", 00:07:46.093 "dma_device_type": 1 00:07:46.093 }, 00:07:46.093 { 00:07:46.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.093 "dma_device_type": 2 00:07:46.093 } 00:07:46.093 ], 00:07:46.093 "driver_specific": {} 00:07:46.093 } 00:07:46.093 ] 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.093 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.094 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.094 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.094 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.094 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.094 "name": "Existed_Raid", 00:07:46.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.094 "strip_size_kb": 64, 00:07:46.094 "state": "configuring", 00:07:46.094 "raid_level": "raid0", 00:07:46.094 "superblock": false, 00:07:46.094 "num_base_bdevs": 2, 00:07:46.094 "num_base_bdevs_discovered": 1, 00:07:46.094 "num_base_bdevs_operational": 2, 00:07:46.094 "base_bdevs_list": [ 00:07:46.094 { 00:07:46.094 "name": "BaseBdev1", 00:07:46.094 "uuid": "e03d0ce8-798b-4f06-80a8-40f0daa8d187", 00:07:46.094 "is_configured": true, 00:07:46.094 "data_offset": 0, 00:07:46.094 "data_size": 65536 00:07:46.094 }, 00:07:46.094 { 00:07:46.094 "name": "BaseBdev2", 00:07:46.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.094 "is_configured": false, 00:07:46.094 "data_offset": 0, 00:07:46.094 "data_size": 0 00:07:46.094 } 00:07:46.094 ] 00:07:46.094 }' 00:07:46.094 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.094 17:42:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.352 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.352 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.352 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.352 [2024-11-20 17:42:13.519918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.352 [2024-11-20 17:42:13.519995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:46.352 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.352 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.352 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.352 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.611 [2024-11-20 17:42:13.531959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.611 [2024-11-20 17:42:13.534010] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.611 [2024-11-20 17:42:13.534070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.611 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.611 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.612 "name": "Existed_Raid", 00:07:46.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.612 "strip_size_kb": 64, 00:07:46.612 "state": "configuring", 00:07:46.612 "raid_level": "raid0", 00:07:46.612 "superblock": false, 00:07:46.612 "num_base_bdevs": 2, 00:07:46.612 
"num_base_bdevs_discovered": 1, 00:07:46.612 "num_base_bdevs_operational": 2, 00:07:46.612 "base_bdevs_list": [ 00:07:46.612 { 00:07:46.612 "name": "BaseBdev1", 00:07:46.612 "uuid": "e03d0ce8-798b-4f06-80a8-40f0daa8d187", 00:07:46.612 "is_configured": true, 00:07:46.612 "data_offset": 0, 00:07:46.612 "data_size": 65536 00:07:46.612 }, 00:07:46.612 { 00:07:46.612 "name": "BaseBdev2", 00:07:46.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.612 "is_configured": false, 00:07:46.612 "data_offset": 0, 00:07:46.612 "data_size": 0 00:07:46.612 } 00:07:46.612 ] 00:07:46.612 }' 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.612 17:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.871 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.871 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.871 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.130 [2024-11-20 17:42:14.056142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.130 [2024-11-20 17:42:14.056188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.130 [2024-11-20 17:42:14.056198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:47.130 [2024-11-20 17:42:14.056524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.130 [2024-11-20 17:42:14.056729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.130 [2024-11-20 17:42:14.056753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:47.130 [2024-11-20 17:42:14.057062] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.130 BaseBdev2 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.130 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.130 [ 00:07:47.130 { 00:07:47.130 "name": "BaseBdev2", 00:07:47.130 "aliases": [ 00:07:47.130 "49f80bb5-8cf8-44bd-804c-ee80b073bad0" 00:07:47.130 ], 00:07:47.130 "product_name": "Malloc disk", 00:07:47.130 "block_size": 512, 00:07:47.130 "num_blocks": 65536, 00:07:47.130 "uuid": "49f80bb5-8cf8-44bd-804c-ee80b073bad0", 00:07:47.130 
"assigned_rate_limits": { 00:07:47.130 "rw_ios_per_sec": 0, 00:07:47.130 "rw_mbytes_per_sec": 0, 00:07:47.130 "r_mbytes_per_sec": 0, 00:07:47.130 "w_mbytes_per_sec": 0 00:07:47.130 }, 00:07:47.130 "claimed": true, 00:07:47.130 "claim_type": "exclusive_write", 00:07:47.130 "zoned": false, 00:07:47.130 "supported_io_types": { 00:07:47.130 "read": true, 00:07:47.130 "write": true, 00:07:47.130 "unmap": true, 00:07:47.130 "flush": true, 00:07:47.130 "reset": true, 00:07:47.130 "nvme_admin": false, 00:07:47.130 "nvme_io": false, 00:07:47.130 "nvme_io_md": false, 00:07:47.130 "write_zeroes": true, 00:07:47.130 "zcopy": true, 00:07:47.130 "get_zone_info": false, 00:07:47.130 "zone_management": false, 00:07:47.130 "zone_append": false, 00:07:47.130 "compare": false, 00:07:47.130 "compare_and_write": false, 00:07:47.130 "abort": true, 00:07:47.130 "seek_hole": false, 00:07:47.130 "seek_data": false, 00:07:47.130 "copy": true, 00:07:47.130 "nvme_iov_md": false 00:07:47.130 }, 00:07:47.130 "memory_domains": [ 00:07:47.130 { 00:07:47.130 "dma_device_id": "system", 00:07:47.130 "dma_device_type": 1 00:07:47.130 }, 00:07:47.130 { 00:07:47.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.130 "dma_device_type": 2 00:07:47.130 } 00:07:47.130 ], 00:07:47.130 "driver_specific": {} 00:07:47.130 } 00:07:47.130 ] 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.131 "name": "Existed_Raid", 00:07:47.131 "uuid": "bd5eb06d-1266-4c3c-b3a3-1be6d2985781", 00:07:47.131 "strip_size_kb": 64, 00:07:47.131 "state": "online", 00:07:47.131 "raid_level": "raid0", 00:07:47.131 "superblock": false, 00:07:47.131 "num_base_bdevs": 2, 00:07:47.131 "num_base_bdevs_discovered": 2, 00:07:47.131 "num_base_bdevs_operational": 2, 00:07:47.131 "base_bdevs_list": [ 00:07:47.131 { 
00:07:47.131 "name": "BaseBdev1", 00:07:47.131 "uuid": "e03d0ce8-798b-4f06-80a8-40f0daa8d187", 00:07:47.131 "is_configured": true, 00:07:47.131 "data_offset": 0, 00:07:47.131 "data_size": 65536 00:07:47.131 }, 00:07:47.131 { 00:07:47.131 "name": "BaseBdev2", 00:07:47.131 "uuid": "49f80bb5-8cf8-44bd-804c-ee80b073bad0", 00:07:47.131 "is_configured": true, 00:07:47.131 "data_offset": 0, 00:07:47.131 "data_size": 65536 00:07:47.131 } 00:07:47.131 ] 00:07:47.131 }' 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.131 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.700 [2024-11-20 17:42:14.583623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.700 "name": "Existed_Raid", 00:07:47.700 "aliases": [ 00:07:47.700 "bd5eb06d-1266-4c3c-b3a3-1be6d2985781" 00:07:47.700 ], 00:07:47.700 "product_name": "Raid Volume", 00:07:47.700 "block_size": 512, 00:07:47.700 "num_blocks": 131072, 00:07:47.700 "uuid": "bd5eb06d-1266-4c3c-b3a3-1be6d2985781", 00:07:47.700 "assigned_rate_limits": { 00:07:47.700 "rw_ios_per_sec": 0, 00:07:47.700 "rw_mbytes_per_sec": 0, 00:07:47.700 "r_mbytes_per_sec": 0, 00:07:47.700 "w_mbytes_per_sec": 0 00:07:47.700 }, 00:07:47.700 "claimed": false, 00:07:47.700 "zoned": false, 00:07:47.700 "supported_io_types": { 00:07:47.700 "read": true, 00:07:47.700 "write": true, 00:07:47.700 "unmap": true, 00:07:47.700 "flush": true, 00:07:47.700 "reset": true, 00:07:47.700 "nvme_admin": false, 00:07:47.700 "nvme_io": false, 00:07:47.700 "nvme_io_md": false, 00:07:47.700 "write_zeroes": true, 00:07:47.700 "zcopy": false, 00:07:47.700 "get_zone_info": false, 00:07:47.700 "zone_management": false, 00:07:47.700 "zone_append": false, 00:07:47.700 "compare": false, 00:07:47.700 "compare_and_write": false, 00:07:47.700 "abort": false, 00:07:47.700 "seek_hole": false, 00:07:47.700 "seek_data": false, 00:07:47.700 "copy": false, 00:07:47.700 "nvme_iov_md": false 00:07:47.700 }, 00:07:47.700 "memory_domains": [ 00:07:47.700 { 00:07:47.700 "dma_device_id": "system", 00:07:47.700 "dma_device_type": 1 00:07:47.700 }, 00:07:47.700 { 00:07:47.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.700 "dma_device_type": 2 00:07:47.700 }, 00:07:47.700 { 00:07:47.700 "dma_device_id": "system", 00:07:47.700 "dma_device_type": 1 00:07:47.700 }, 00:07:47.700 { 00:07:47.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.700 "dma_device_type": 2 00:07:47.700 } 00:07:47.700 ], 00:07:47.700 "driver_specific": { 00:07:47.700 "raid": { 00:07:47.700 "uuid": "bd5eb06d-1266-4c3c-b3a3-1be6d2985781", 
00:07:47.700 "strip_size_kb": 64, 00:07:47.700 "state": "online", 00:07:47.700 "raid_level": "raid0", 00:07:47.700 "superblock": false, 00:07:47.700 "num_base_bdevs": 2, 00:07:47.700 "num_base_bdevs_discovered": 2, 00:07:47.700 "num_base_bdevs_operational": 2, 00:07:47.700 "base_bdevs_list": [ 00:07:47.700 { 00:07:47.700 "name": "BaseBdev1", 00:07:47.700 "uuid": "e03d0ce8-798b-4f06-80a8-40f0daa8d187", 00:07:47.700 "is_configured": true, 00:07:47.700 "data_offset": 0, 00:07:47.700 "data_size": 65536 00:07:47.700 }, 00:07:47.700 { 00:07:47.700 "name": "BaseBdev2", 00:07:47.700 "uuid": "49f80bb5-8cf8-44bd-804c-ee80b073bad0", 00:07:47.700 "is_configured": true, 00:07:47.700 "data_offset": 0, 00:07:47.700 "data_size": 65536 00:07:47.700 } 00:07:47.700 ] 00:07:47.700 } 00:07:47.700 } 00:07:47.700 }' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:47.700 BaseBdev2' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.700 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.700 [2024-11-20 17:42:14.830953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.700 [2024-11-20 17:42:14.830995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.700 [2024-11-20 17:42:14.831065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.963 17:42:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.963 "name": "Existed_Raid", 00:07:47.963 "uuid": "bd5eb06d-1266-4c3c-b3a3-1be6d2985781", 00:07:47.963 "strip_size_kb": 64, 00:07:47.963 "state": "offline", 00:07:47.963 "raid_level": "raid0", 00:07:47.963 "superblock": false, 00:07:47.963 "num_base_bdevs": 2, 00:07:47.963 "num_base_bdevs_discovered": 1, 00:07:47.963 "num_base_bdevs_operational": 1, 00:07:47.963 "base_bdevs_list": [ 00:07:47.963 { 00:07:47.963 "name": null, 00:07:47.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.963 "is_configured": false, 00:07:47.963 "data_offset": 0, 00:07:47.963 "data_size": 65536 00:07:47.963 }, 00:07:47.963 { 00:07:47.963 "name": "BaseBdev2", 00:07:47.963 "uuid": "49f80bb5-8cf8-44bd-804c-ee80b073bad0", 00:07:47.963 "is_configured": true, 00:07:47.963 "data_offset": 0, 00:07:47.963 "data_size": 65536 00:07:47.963 } 00:07:47.963 ] 00:07:47.963 }' 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.963 17:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.550 17:42:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.550 [2024-11-20 17:42:15.474674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.550 [2024-11-20 17:42:15.474740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61072 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61072 ']' 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61072 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61072 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.550 killing process with pid 61072 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61072' 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61072 00:07:48.550 [2024-11-20 17:42:15.684037] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.550 17:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61072 00:07:48.550 [2024-11-20 17:42:15.703765] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.927 17:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:49.927 00:07:49.927 real 0m5.446s 00:07:49.927 user 0m7.912s 00:07:49.927 sys 
0m0.841s 00:07:49.927 17:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.927 ************************************ 00:07:49.927 END TEST raid_state_function_test 00:07:49.927 ************************************ 00:07:49.927 17:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.927 17:42:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:49.927 17:42:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.927 17:42:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.927 17:42:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.927 ************************************ 00:07:49.927 START TEST raid_state_function_test_sb 00:07:49.927 ************************************ 00:07:49.927 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:49.927 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:49.927 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.927 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61331 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
61331' 00:07:49.928 Process raid pid: 61331 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61331 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61331 ']' 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.928 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.187 [2024-11-20 17:42:17.128875] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:07:50.187 [2024-11-20 17:42:17.129025] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.187 [2024-11-20 17:42:17.308846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.446 [2024-11-20 17:42:17.452770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.705 [2024-11-20 17:42:17.699362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.705 [2024-11-20 17:42:17.699427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.964 [2024-11-20 17:42:17.994152] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.964 [2024-11-20 17:42:17.994223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.964 [2024-11-20 17:42:17.994234] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.964 [2024-11-20 17:42:17.994245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.964 
17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.964 17:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.964 "name": "Existed_Raid", 00:07:50.964 "uuid": "767837b8-5d6b-4967-8c4d-f96b0bcf2458", 00:07:50.964 "strip_size_kb": 
64, 00:07:50.964 "state": "configuring", 00:07:50.964 "raid_level": "raid0", 00:07:50.964 "superblock": true, 00:07:50.964 "num_base_bdevs": 2, 00:07:50.964 "num_base_bdevs_discovered": 0, 00:07:50.964 "num_base_bdevs_operational": 2, 00:07:50.964 "base_bdevs_list": [ 00:07:50.964 { 00:07:50.964 "name": "BaseBdev1", 00:07:50.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.964 "is_configured": false, 00:07:50.964 "data_offset": 0, 00:07:50.964 "data_size": 0 00:07:50.964 }, 00:07:50.964 { 00:07:50.964 "name": "BaseBdev2", 00:07:50.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.964 "is_configured": false, 00:07:50.964 "data_offset": 0, 00:07:50.964 "data_size": 0 00:07:50.964 } 00:07:50.964 ] 00:07:50.964 }' 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.964 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.533 [2024-11-20 17:42:18.405381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.533 [2024-11-20 17:42:18.405443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.533 17:42:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.533 [2024-11-20 17:42:18.417323] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.533 [2024-11-20 17:42:18.417375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.533 [2024-11-20 17:42:18.417384] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.533 [2024-11-20 17:42:18.417399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.533 [2024-11-20 17:42:18.475370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.533 BaseBdev1 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.533 [ 00:07:51.533 { 00:07:51.533 "name": "BaseBdev1", 00:07:51.533 "aliases": [ 00:07:51.533 "8581d64b-9a1f-4709-a89e-7d1532241cb2" 00:07:51.533 ], 00:07:51.533 "product_name": "Malloc disk", 00:07:51.533 "block_size": 512, 00:07:51.533 "num_blocks": 65536, 00:07:51.533 "uuid": "8581d64b-9a1f-4709-a89e-7d1532241cb2", 00:07:51.533 "assigned_rate_limits": { 00:07:51.533 "rw_ios_per_sec": 0, 00:07:51.533 "rw_mbytes_per_sec": 0, 00:07:51.533 "r_mbytes_per_sec": 0, 00:07:51.533 "w_mbytes_per_sec": 0 00:07:51.533 }, 00:07:51.533 "claimed": true, 00:07:51.533 "claim_type": "exclusive_write", 00:07:51.533 "zoned": false, 00:07:51.533 "supported_io_types": { 00:07:51.533 "read": true, 00:07:51.533 "write": true, 00:07:51.533 "unmap": true, 00:07:51.533 "flush": true, 00:07:51.533 "reset": true, 00:07:51.533 "nvme_admin": false, 00:07:51.533 "nvme_io": false, 00:07:51.533 "nvme_io_md": false, 00:07:51.533 "write_zeroes": true, 00:07:51.533 "zcopy": true, 00:07:51.533 "get_zone_info": false, 00:07:51.533 "zone_management": false, 00:07:51.533 "zone_append": false, 00:07:51.533 "compare": false, 00:07:51.533 "compare_and_write": false, 00:07:51.533 
"abort": true, 00:07:51.533 "seek_hole": false, 00:07:51.533 "seek_data": false, 00:07:51.533 "copy": true, 00:07:51.533 "nvme_iov_md": false 00:07:51.533 }, 00:07:51.533 "memory_domains": [ 00:07:51.533 { 00:07:51.533 "dma_device_id": "system", 00:07:51.533 "dma_device_type": 1 00:07:51.533 }, 00:07:51.533 { 00:07:51.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.533 "dma_device_type": 2 00:07:51.533 } 00:07:51.533 ], 00:07:51.533 "driver_specific": {} 00:07:51.533 } 00:07:51.533 ] 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.533 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.533 "name": "Existed_Raid", 00:07:51.533 "uuid": "27569fa3-761e-4d45-8adf-515a51504060", 00:07:51.533 "strip_size_kb": 64, 00:07:51.533 "state": "configuring", 00:07:51.533 "raid_level": "raid0", 00:07:51.533 "superblock": true, 00:07:51.533 "num_base_bdevs": 2, 00:07:51.533 "num_base_bdevs_discovered": 1, 00:07:51.533 "num_base_bdevs_operational": 2, 00:07:51.533 "base_bdevs_list": [ 00:07:51.533 { 00:07:51.533 "name": "BaseBdev1", 00:07:51.533 "uuid": "8581d64b-9a1f-4709-a89e-7d1532241cb2", 00:07:51.533 "is_configured": true, 00:07:51.533 "data_offset": 2048, 00:07:51.533 "data_size": 63488 00:07:51.533 }, 00:07:51.533 { 00:07:51.533 "name": "BaseBdev2", 00:07:51.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.533 "is_configured": false, 00:07:51.533 "data_offset": 0, 00:07:51.533 "data_size": 0 00:07:51.534 } 00:07:51.534 ] 00:07:51.534 }' 00:07:51.534 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.534 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.897 [2024-11-20 17:42:18.970629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.897 [2024-11-20 17:42:18.970714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.897 [2024-11-20 17:42:18.982647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.897 [2024-11-20 17:42:18.984821] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.897 [2024-11-20 17:42:18.984868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.897 17:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.897 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.897 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.897 "name": "Existed_Raid", 00:07:51.897 "uuid": "a4e764a2-b897-489c-bdb6-3bc923d0a801", 00:07:51.897 "strip_size_kb": 64, 00:07:51.897 "state": "configuring", 00:07:51.897 "raid_level": "raid0", 00:07:51.897 "superblock": true, 00:07:51.897 "num_base_bdevs": 2, 00:07:51.897 "num_base_bdevs_discovered": 1, 00:07:51.897 "num_base_bdevs_operational": 2, 00:07:51.897 "base_bdevs_list": [ 00:07:51.897 { 00:07:51.897 "name": "BaseBdev1", 00:07:51.897 "uuid": "8581d64b-9a1f-4709-a89e-7d1532241cb2", 00:07:51.897 "is_configured": true, 00:07:51.897 "data_offset": 2048, 
00:07:51.897 "data_size": 63488 00:07:51.897 }, 00:07:51.897 { 00:07:51.897 "name": "BaseBdev2", 00:07:51.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.897 "is_configured": false, 00:07:51.897 "data_offset": 0, 00:07:51.897 "data_size": 0 00:07:51.897 } 00:07:51.897 ] 00:07:51.897 }' 00:07:51.897 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.897 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.468 [2024-11-20 17:42:19.501825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.468 [2024-11-20 17:42:19.502133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.468 [2024-11-20 17:42:19.502149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.468 [2024-11-20 17:42:19.502434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.468 [2024-11-20 17:42:19.502620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.468 [2024-11-20 17:42:19.502634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.468 [2024-11-20 17:42:19.502787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.468 BaseBdev2 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.468 [ 00:07:52.468 { 00:07:52.468 "name": "BaseBdev2", 00:07:52.468 "aliases": [ 00:07:52.468 "32151e85-81a2-4609-8ec9-684f951ed955" 00:07:52.468 ], 00:07:52.468 "product_name": "Malloc disk", 00:07:52.468 "block_size": 512, 00:07:52.468 "num_blocks": 65536, 00:07:52.468 "uuid": "32151e85-81a2-4609-8ec9-684f951ed955", 00:07:52.468 "assigned_rate_limits": { 00:07:52.468 "rw_ios_per_sec": 0, 00:07:52.468 "rw_mbytes_per_sec": 0, 00:07:52.468 "r_mbytes_per_sec": 0, 00:07:52.468 "w_mbytes_per_sec": 0 00:07:52.468 }, 00:07:52.468 "claimed": true, 00:07:52.468 "claim_type": 
"exclusive_write", 00:07:52.468 "zoned": false, 00:07:52.468 "supported_io_types": { 00:07:52.468 "read": true, 00:07:52.468 "write": true, 00:07:52.468 "unmap": true, 00:07:52.468 "flush": true, 00:07:52.468 "reset": true, 00:07:52.468 "nvme_admin": false, 00:07:52.468 "nvme_io": false, 00:07:52.468 "nvme_io_md": false, 00:07:52.468 "write_zeroes": true, 00:07:52.468 "zcopy": true, 00:07:52.468 "get_zone_info": false, 00:07:52.468 "zone_management": false, 00:07:52.468 "zone_append": false, 00:07:52.468 "compare": false, 00:07:52.468 "compare_and_write": false, 00:07:52.468 "abort": true, 00:07:52.468 "seek_hole": false, 00:07:52.468 "seek_data": false, 00:07:52.468 "copy": true, 00:07:52.468 "nvme_iov_md": false 00:07:52.468 }, 00:07:52.468 "memory_domains": [ 00:07:52.468 { 00:07:52.468 "dma_device_id": "system", 00:07:52.468 "dma_device_type": 1 00:07:52.468 }, 00:07:52.468 { 00:07:52.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.468 "dma_device_type": 2 00:07:52.468 } 00:07:52.468 ], 00:07:52.468 "driver_specific": {} 00:07:52.468 } 00:07:52.468 ] 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.468 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.469 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.469 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.469 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.469 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.469 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.469 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.469 "name": "Existed_Raid", 00:07:52.469 "uuid": "a4e764a2-b897-489c-bdb6-3bc923d0a801", 00:07:52.469 "strip_size_kb": 64, 00:07:52.469 "state": "online", 00:07:52.469 "raid_level": "raid0", 00:07:52.469 "superblock": true, 00:07:52.469 "num_base_bdevs": 2, 00:07:52.469 "num_base_bdevs_discovered": 2, 00:07:52.469 "num_base_bdevs_operational": 2, 00:07:52.469 "base_bdevs_list": [ 00:07:52.469 { 00:07:52.469 "name": "BaseBdev1", 00:07:52.469 "uuid": "8581d64b-9a1f-4709-a89e-7d1532241cb2", 00:07:52.469 "is_configured": true, 00:07:52.469 "data_offset": 2048, 00:07:52.469 "data_size": 63488 
00:07:52.469 }, 00:07:52.469 { 00:07:52.469 "name": "BaseBdev2", 00:07:52.469 "uuid": "32151e85-81a2-4609-8ec9-684f951ed955", 00:07:52.469 "is_configured": true, 00:07:52.469 "data_offset": 2048, 00:07:52.469 "data_size": 63488 00:07:52.469 } 00:07:52.469 ] 00:07:52.469 }' 00:07:52.469 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.469 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.039 17:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.039 [2024-11-20 17:42:19.993411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.039 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.039 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:53.039 "name": 
"Existed_Raid", 00:07:53.039 "aliases": [ 00:07:53.039 "a4e764a2-b897-489c-bdb6-3bc923d0a801" 00:07:53.039 ], 00:07:53.039 "product_name": "Raid Volume", 00:07:53.039 "block_size": 512, 00:07:53.039 "num_blocks": 126976, 00:07:53.039 "uuid": "a4e764a2-b897-489c-bdb6-3bc923d0a801", 00:07:53.039 "assigned_rate_limits": { 00:07:53.039 "rw_ios_per_sec": 0, 00:07:53.039 "rw_mbytes_per_sec": 0, 00:07:53.039 "r_mbytes_per_sec": 0, 00:07:53.039 "w_mbytes_per_sec": 0 00:07:53.039 }, 00:07:53.039 "claimed": false, 00:07:53.039 "zoned": false, 00:07:53.039 "supported_io_types": { 00:07:53.039 "read": true, 00:07:53.039 "write": true, 00:07:53.039 "unmap": true, 00:07:53.039 "flush": true, 00:07:53.039 "reset": true, 00:07:53.039 "nvme_admin": false, 00:07:53.039 "nvme_io": false, 00:07:53.039 "nvme_io_md": false, 00:07:53.039 "write_zeroes": true, 00:07:53.039 "zcopy": false, 00:07:53.039 "get_zone_info": false, 00:07:53.039 "zone_management": false, 00:07:53.039 "zone_append": false, 00:07:53.039 "compare": false, 00:07:53.039 "compare_and_write": false, 00:07:53.039 "abort": false, 00:07:53.039 "seek_hole": false, 00:07:53.039 "seek_data": false, 00:07:53.039 "copy": false, 00:07:53.039 "nvme_iov_md": false 00:07:53.039 }, 00:07:53.039 "memory_domains": [ 00:07:53.039 { 00:07:53.039 "dma_device_id": "system", 00:07:53.039 "dma_device_type": 1 00:07:53.039 }, 00:07:53.039 { 00:07:53.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.039 "dma_device_type": 2 00:07:53.039 }, 00:07:53.039 { 00:07:53.039 "dma_device_id": "system", 00:07:53.039 "dma_device_type": 1 00:07:53.039 }, 00:07:53.039 { 00:07:53.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.039 "dma_device_type": 2 00:07:53.039 } 00:07:53.039 ], 00:07:53.039 "driver_specific": { 00:07:53.039 "raid": { 00:07:53.039 "uuid": "a4e764a2-b897-489c-bdb6-3bc923d0a801", 00:07:53.039 "strip_size_kb": 64, 00:07:53.039 "state": "online", 00:07:53.039 "raid_level": "raid0", 00:07:53.039 "superblock": true, 00:07:53.039 
"num_base_bdevs": 2, 00:07:53.039 "num_base_bdevs_discovered": 2, 00:07:53.039 "num_base_bdevs_operational": 2, 00:07:53.039 "base_bdevs_list": [ 00:07:53.039 { 00:07:53.039 "name": "BaseBdev1", 00:07:53.039 "uuid": "8581d64b-9a1f-4709-a89e-7d1532241cb2", 00:07:53.039 "is_configured": true, 00:07:53.039 "data_offset": 2048, 00:07:53.039 "data_size": 63488 00:07:53.039 }, 00:07:53.039 { 00:07:53.039 "name": "BaseBdev2", 00:07:53.039 "uuid": "32151e85-81a2-4609-8ec9-684f951ed955", 00:07:53.039 "is_configured": true, 00:07:53.039 "data_offset": 2048, 00:07:53.039 "data_size": 63488 00:07:53.039 } 00:07:53.039 ] 00:07:53.039 } 00:07:53.039 } 00:07:53.039 }' 00:07:53.039 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:53.039 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:53.039 BaseBdev2' 00:07:53.039 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.039 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.040 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.299 [2024-11-20 17:42:20.212824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.299 [2024-11-20 17:42:20.212898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.299 [2024-11-20 17:42:20.212983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.299 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:53.299 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.299 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.300 17:42:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.300 "name": "Existed_Raid", 00:07:53.300 "uuid": "a4e764a2-b897-489c-bdb6-3bc923d0a801", 00:07:53.300 "strip_size_kb": 64, 00:07:53.300 "state": "offline", 00:07:53.300 "raid_level": "raid0", 00:07:53.300 "superblock": true, 00:07:53.300 "num_base_bdevs": 2, 00:07:53.300 "num_base_bdevs_discovered": 1, 00:07:53.300 "num_base_bdevs_operational": 1, 00:07:53.300 "base_bdevs_list": [ 00:07:53.300 { 00:07:53.300 "name": null, 00:07:53.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.300 "is_configured": false, 00:07:53.300 "data_offset": 0, 00:07:53.300 "data_size": 63488 00:07:53.300 }, 00:07:53.300 { 00:07:53.300 "name": "BaseBdev2", 00:07:53.300 "uuid": "32151e85-81a2-4609-8ec9-684f951ed955", 00:07:53.300 "is_configured": true, 00:07:53.300 "data_offset": 2048, 00:07:53.300 "data_size": 63488 00:07:53.300 } 00:07:53.300 ] 00:07:53.300 }' 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.300 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.869 17:42:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.869 [2024-11-20 17:42:20.832631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.869 [2024-11-20 17:42:20.832822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61331 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61331 ']' 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61331 00:07:53.869 17:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:53.869 17:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.869 17:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61331 00:07:53.869 killing process with pid 61331 00:07:53.869 17:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.869 17:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.869 17:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61331' 00:07:53.869 17:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61331 00:07:53.869 [2024-11-20 17:42:21.036297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.869 17:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61331 00:07:54.129 [2024-11-20 17:42:21.054211] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.511 17:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:55.511 00:07:55.512 real 0m5.251s 00:07:55.512 user 0m7.429s 00:07:55.512 sys 0m0.925s 00:07:55.512 17:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.512 ************************************ 00:07:55.512 END TEST raid_state_function_test_sb 00:07:55.512 ************************************ 00:07:55.512 17:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.512 17:42:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:55.512 17:42:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:55.512 17:42:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.512 17:42:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.512 ************************************ 00:07:55.512 START TEST raid_superblock_test 00:07:55.512 ************************************ 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61583 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61583 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61583 ']' 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.512 17:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.512 [2024-11-20 17:42:22.434991] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:55.512 [2024-11-20 17:42:22.435192] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61583 ] 00:07:55.512 [2024-11-20 17:42:22.612344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.772 [2024-11-20 17:42:22.752047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.032 [2024-11-20 17:42:22.992211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.032 [2024-11-20 17:42:22.992409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.292 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.292 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:56.292 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:56.292 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.293 17:42:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.293 malloc1 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.293 [2024-11-20 17:42:23.357230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.293 [2024-11-20 17:42:23.357406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.293 [2024-11-20 17:42:23.357454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:56.293 [2024-11-20 17:42:23.357498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.293 [2024-11-20 17:42:23.360188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.293 [2024-11-20 17:42:23.360270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.293 pt1 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.293 17:42:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.293 malloc2 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.293 [2024-11-20 17:42:23.422651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.293 [2024-11-20 17:42:23.422741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.293 [2024-11-20 17:42:23.422775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:56.293 
[2024-11-20 17:42:23.422785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.293 [2024-11-20 17:42:23.425343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.293 [2024-11-20 17:42:23.425383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.293 pt2 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.293 [2024-11-20 17:42:23.434682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.293 [2024-11-20 17:42:23.436867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.293 [2024-11-20 17:42:23.437056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:56.293 [2024-11-20 17:42:23.437071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:56.293 [2024-11-20 17:42:23.437351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.293 [2024-11-20 17:42:23.437521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:56.293 [2024-11-20 17:42:23.437534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:56.293 [2024-11-20 17:42:23.437710] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.293 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.553 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.553 "name": "raid_bdev1", 00:07:56.553 "uuid": 
"245630cc-885b-447c-b612-05bc0247178f", 00:07:56.553 "strip_size_kb": 64, 00:07:56.553 "state": "online", 00:07:56.553 "raid_level": "raid0", 00:07:56.553 "superblock": true, 00:07:56.553 "num_base_bdevs": 2, 00:07:56.553 "num_base_bdevs_discovered": 2, 00:07:56.553 "num_base_bdevs_operational": 2, 00:07:56.553 "base_bdevs_list": [ 00:07:56.553 { 00:07:56.553 "name": "pt1", 00:07:56.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.553 "is_configured": true, 00:07:56.553 "data_offset": 2048, 00:07:56.553 "data_size": 63488 00:07:56.553 }, 00:07:56.553 { 00:07:56.553 "name": "pt2", 00:07:56.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.553 "is_configured": true, 00:07:56.553 "data_offset": 2048, 00:07:56.553 "data_size": 63488 00:07:56.553 } 00:07:56.553 ] 00:07:56.553 }' 00:07:56.553 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.553 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.812 
17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.812 [2024-11-20 17:42:23.934192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.812 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.812 "name": "raid_bdev1", 00:07:56.812 "aliases": [ 00:07:56.812 "245630cc-885b-447c-b612-05bc0247178f" 00:07:56.812 ], 00:07:56.812 "product_name": "Raid Volume", 00:07:56.812 "block_size": 512, 00:07:56.812 "num_blocks": 126976, 00:07:56.813 "uuid": "245630cc-885b-447c-b612-05bc0247178f", 00:07:56.813 "assigned_rate_limits": { 00:07:56.813 "rw_ios_per_sec": 0, 00:07:56.813 "rw_mbytes_per_sec": 0, 00:07:56.813 "r_mbytes_per_sec": 0, 00:07:56.813 "w_mbytes_per_sec": 0 00:07:56.813 }, 00:07:56.813 "claimed": false, 00:07:56.813 "zoned": false, 00:07:56.813 "supported_io_types": { 00:07:56.813 "read": true, 00:07:56.813 "write": true, 00:07:56.813 "unmap": true, 00:07:56.813 "flush": true, 00:07:56.813 "reset": true, 00:07:56.813 "nvme_admin": false, 00:07:56.813 "nvme_io": false, 00:07:56.813 "nvme_io_md": false, 00:07:56.813 "write_zeroes": true, 00:07:56.813 "zcopy": false, 00:07:56.813 "get_zone_info": false, 00:07:56.813 "zone_management": false, 00:07:56.813 "zone_append": false, 00:07:56.813 "compare": false, 00:07:56.813 "compare_and_write": false, 00:07:56.813 "abort": false, 00:07:56.813 "seek_hole": false, 00:07:56.813 "seek_data": false, 00:07:56.813 "copy": false, 00:07:56.813 "nvme_iov_md": false 00:07:56.813 }, 00:07:56.813 "memory_domains": [ 00:07:56.813 { 00:07:56.813 "dma_device_id": "system", 00:07:56.813 "dma_device_type": 1 00:07:56.813 }, 00:07:56.813 { 00:07:56.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.813 "dma_device_type": 2 00:07:56.813 }, 00:07:56.813 { 00:07:56.813 "dma_device_id": "system", 00:07:56.813 
"dma_device_type": 1 00:07:56.813 }, 00:07:56.813 { 00:07:56.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.813 "dma_device_type": 2 00:07:56.813 } 00:07:56.813 ], 00:07:56.813 "driver_specific": { 00:07:56.813 "raid": { 00:07:56.813 "uuid": "245630cc-885b-447c-b612-05bc0247178f", 00:07:56.813 "strip_size_kb": 64, 00:07:56.813 "state": "online", 00:07:56.813 "raid_level": "raid0", 00:07:56.813 "superblock": true, 00:07:56.813 "num_base_bdevs": 2, 00:07:56.813 "num_base_bdevs_discovered": 2, 00:07:56.813 "num_base_bdevs_operational": 2, 00:07:56.813 "base_bdevs_list": [ 00:07:56.813 { 00:07:56.813 "name": "pt1", 00:07:56.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.813 "is_configured": true, 00:07:56.813 "data_offset": 2048, 00:07:56.813 "data_size": 63488 00:07:56.813 }, 00:07:56.813 { 00:07:56.813 "name": "pt2", 00:07:56.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.813 "is_configured": true, 00:07:56.813 "data_offset": 2048, 00:07:56.813 "data_size": 63488 00:07:56.813 } 00:07:56.813 ] 00:07:56.813 } 00:07:56.813 } 00:07:56.813 }' 00:07:56.813 17:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.072 pt2' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 [2024-11-20 17:42:24.181570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=245630cc-885b-447c-b612-05bc0247178f 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 245630cc-885b-447c-b612-05bc0247178f ']' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 [2024-11-20 17:42:24.225232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.072 [2024-11-20 17:42:24.225270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.072 [2024-11-20 17:42:24.225364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.072 [2024-11-20 17:42:24.225421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.072 [2024-11-20 17:42:24.225434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:57.072 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.385 
17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.385 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 [2024-11-20 17:42:24.353091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:57.386 [2024-11-20 17:42:24.355307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:57.386 [2024-11-20 17:42:24.355385] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:57.386 [2024-11-20 17:42:24.355438] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:57.386 [2024-11-20 17:42:24.355453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.386 [2024-11-20 17:42:24.355466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:57.386 request: 00:07:57.386 { 00:07:57.386 "name": "raid_bdev1", 00:07:57.386 "raid_level": "raid0", 00:07:57.386 "base_bdevs": [ 00:07:57.386 "malloc1", 00:07:57.386 "malloc2" 00:07:57.386 ], 00:07:57.386 "strip_size_kb": 64, 00:07:57.386 "superblock": false, 00:07:57.386 "method": "bdev_raid_create", 00:07:57.386 "req_id": 1 00:07:57.386 } 00:07:57.386 Got JSON-RPC error response 00:07:57.386 response: 00:07:57.386 { 00:07:57.386 "code": -17, 00:07:57.386 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:57.386 } 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 [2024-11-20 17:42:24.413043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.386 [2024-11-20 17:42:24.413147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.386 [2024-11-20 17:42:24.413170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:57.386 [2024-11-20 17:42:24.413182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.386 [2024-11-20 17:42:24.415797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.386 [2024-11-20 17:42:24.415931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.386 [2024-11-20 17:42:24.416061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.386 [2024-11-20 17:42:24.416129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.386 pt1 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.386 "name": "raid_bdev1", 00:07:57.386 "uuid": "245630cc-885b-447c-b612-05bc0247178f", 00:07:57.386 "strip_size_kb": 64, 00:07:57.386 "state": "configuring", 00:07:57.386 "raid_level": "raid0", 00:07:57.386 "superblock": true, 00:07:57.386 "num_base_bdevs": 2, 00:07:57.386 "num_base_bdevs_discovered": 1, 00:07:57.386 "num_base_bdevs_operational": 2, 00:07:57.386 "base_bdevs_list": [ 00:07:57.386 { 00:07:57.386 "name": "pt1", 00:07:57.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.386 "is_configured": true, 00:07:57.386 "data_offset": 2048, 00:07:57.386 "data_size": 63488 00:07:57.386 }, 00:07:57.386 { 00:07:57.386 "name": null, 00:07:57.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.386 "is_configured": false, 00:07:57.386 "data_offset": 2048, 00:07:57.386 "data_size": 63488 00:07:57.386 } 00:07:57.386 ] 00:07:57.386 }' 00:07:57.386 17:42:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.386 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.958 [2024-11-20 17:42:24.900206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.958 [2024-11-20 17:42:24.900416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.958 [2024-11-20 17:42:24.900472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:57.958 [2024-11-20 17:42:24.900515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.958 [2024-11-20 17:42:24.901127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.958 [2024-11-20 17:42:24.901199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.958 [2024-11-20 17:42:24.901336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:57.958 [2024-11-20 17:42:24.901399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.958 [2024-11-20 17:42:24.901561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:57.958 [2024-11-20 17:42:24.901602] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.958 [2024-11-20 17:42:24.901904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:57.958 [2024-11-20 17:42:24.902096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:57.958 [2024-11-20 17:42:24.902135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:57.958 [2024-11-20 17:42:24.902321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.958 pt2 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.958 "name": "raid_bdev1", 00:07:57.958 "uuid": "245630cc-885b-447c-b612-05bc0247178f", 00:07:57.958 "strip_size_kb": 64, 00:07:57.958 "state": "online", 00:07:57.958 "raid_level": "raid0", 00:07:57.958 "superblock": true, 00:07:57.958 "num_base_bdevs": 2, 00:07:57.958 "num_base_bdevs_discovered": 2, 00:07:57.958 "num_base_bdevs_operational": 2, 00:07:57.958 "base_bdevs_list": [ 00:07:57.958 { 00:07:57.958 "name": "pt1", 00:07:57.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.958 "is_configured": true, 00:07:57.958 "data_offset": 2048, 00:07:57.958 "data_size": 63488 00:07:57.958 }, 00:07:57.958 { 00:07:57.958 "name": "pt2", 00:07:57.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.958 "is_configured": true, 00:07:57.958 "data_offset": 2048, 00:07:57.958 "data_size": 63488 00:07:57.958 } 00:07:57.958 ] 00:07:57.958 }' 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.958 17:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.219 
17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.219 [2024-11-20 17:42:25.295752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.219 "name": "raid_bdev1", 00:07:58.219 "aliases": [ 00:07:58.219 "245630cc-885b-447c-b612-05bc0247178f" 00:07:58.219 ], 00:07:58.219 "product_name": "Raid Volume", 00:07:58.219 "block_size": 512, 00:07:58.219 "num_blocks": 126976, 00:07:58.219 "uuid": "245630cc-885b-447c-b612-05bc0247178f", 00:07:58.219 "assigned_rate_limits": { 00:07:58.219 "rw_ios_per_sec": 0, 00:07:58.219 "rw_mbytes_per_sec": 0, 00:07:58.219 "r_mbytes_per_sec": 0, 00:07:58.219 "w_mbytes_per_sec": 0 00:07:58.219 }, 00:07:58.219 "claimed": false, 00:07:58.219 "zoned": false, 00:07:58.219 "supported_io_types": { 00:07:58.219 "read": true, 00:07:58.219 "write": true, 00:07:58.219 "unmap": true, 00:07:58.219 "flush": true, 00:07:58.219 "reset": true, 00:07:58.219 "nvme_admin": false, 00:07:58.219 "nvme_io": false, 00:07:58.219 "nvme_io_md": false, 00:07:58.219 
"write_zeroes": true, 00:07:58.219 "zcopy": false, 00:07:58.219 "get_zone_info": false, 00:07:58.219 "zone_management": false, 00:07:58.219 "zone_append": false, 00:07:58.219 "compare": false, 00:07:58.219 "compare_and_write": false, 00:07:58.219 "abort": false, 00:07:58.219 "seek_hole": false, 00:07:58.219 "seek_data": false, 00:07:58.219 "copy": false, 00:07:58.219 "nvme_iov_md": false 00:07:58.219 }, 00:07:58.219 "memory_domains": [ 00:07:58.219 { 00:07:58.219 "dma_device_id": "system", 00:07:58.219 "dma_device_type": 1 00:07:58.219 }, 00:07:58.219 { 00:07:58.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.219 "dma_device_type": 2 00:07:58.219 }, 00:07:58.219 { 00:07:58.219 "dma_device_id": "system", 00:07:58.219 "dma_device_type": 1 00:07:58.219 }, 00:07:58.219 { 00:07:58.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.219 "dma_device_type": 2 00:07:58.219 } 00:07:58.219 ], 00:07:58.219 "driver_specific": { 00:07:58.219 "raid": { 00:07:58.219 "uuid": "245630cc-885b-447c-b612-05bc0247178f", 00:07:58.219 "strip_size_kb": 64, 00:07:58.219 "state": "online", 00:07:58.219 "raid_level": "raid0", 00:07:58.219 "superblock": true, 00:07:58.219 "num_base_bdevs": 2, 00:07:58.219 "num_base_bdevs_discovered": 2, 00:07:58.219 "num_base_bdevs_operational": 2, 00:07:58.219 "base_bdevs_list": [ 00:07:58.219 { 00:07:58.219 "name": "pt1", 00:07:58.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.219 "is_configured": true, 00:07:58.219 "data_offset": 2048, 00:07:58.219 "data_size": 63488 00:07:58.219 }, 00:07:58.219 { 00:07:58.219 "name": "pt2", 00:07:58.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.219 "is_configured": true, 00:07:58.219 "data_offset": 2048, 00:07:58.219 "data_size": 63488 00:07:58.219 } 00:07:58.219 ] 00:07:58.219 } 00:07:58.219 } 00:07:58.219 }' 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.219 pt2' 00:07:58.219 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.479 17:42:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.479 [2024-11-20 17:42:25.503490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 245630cc-885b-447c-b612-05bc0247178f '!=' 245630cc-885b-447c-b612-05bc0247178f ']' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61583 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61583 ']' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61583 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61583 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.479 killing process with pid 61583 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61583' 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61583 00:07:58.479 [2024-11-20 17:42:25.589616] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.479 [2024-11-20 17:42:25.589755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.479 17:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61583 00:07:58.479 [2024-11-20 17:42:25.589814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.479 [2024-11-20 17:42:25.589833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:58.739 [2024-11-20 17:42:25.811718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.119 17:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:00.119 00:08:00.119 real 0m4.699s 00:08:00.119 user 0m6.459s 00:08:00.119 sys 0m0.856s 00:08:00.119 ************************************ 00:08:00.119 END TEST raid_superblock_test 00:08:00.119 ************************************ 00:08:00.119 17:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.119 17:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.119 17:42:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:00.119 17:42:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.119 17:42:27 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:00.119 17:42:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.119 ************************************ 00:08:00.119 START TEST raid_read_error_test 00:08:00.119 ************************************ 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3OPzsy8SKz 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61794 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61794 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61794 ']' 00:08:00.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.119 17:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.119 [2024-11-20 17:42:27.216572] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:00.119 [2024-11-20 17:42:27.216707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61794 ] 00:08:00.379 [2024-11-20 17:42:27.393734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.379 [2024-11-20 17:42:27.535239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.639 [2024-11-20 17:42:27.775060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.639 [2024-11-20 17:42:27.775148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.898 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.898 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:00.898 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.898 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.898 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.898 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.158 BaseBdev1_malloc 00:08:01.158 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.158 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.159 true 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.159 [2024-11-20 17:42:28.120873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:01.159 [2024-11-20 17:42:28.121034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.159 [2024-11-20 17:42:28.121061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:01.159 [2024-11-20 17:42:28.121073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.159 [2024-11-20 17:42:28.123499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.159 [2024-11-20 17:42:28.123543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:01.159 BaseBdev1 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:01.159 BaseBdev2_malloc 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.159 true 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.159 [2024-11-20 17:42:28.196775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:01.159 [2024-11-20 17:42:28.196845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.159 [2024-11-20 17:42:28.196863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:01.159 [2024-11-20 17:42:28.196874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.159 [2024-11-20 17:42:28.199278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.159 [2024-11-20 17:42:28.199318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:01.159 BaseBdev2 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:01.159 17:42:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.159 [2024-11-20 17:42:28.208845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.159 [2024-11-20 17:42:28.210963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.159 [2024-11-20 17:42:28.211278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:01.159 [2024-11-20 17:42:28.211302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:01.159 [2024-11-20 17:42:28.211548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:01.159 [2024-11-20 17:42:28.211753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:01.159 [2024-11-20 17:42:28.211767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:01.159 [2024-11-20 17:42:28.211925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.159 "name": "raid_bdev1", 00:08:01.159 "uuid": "0ad18305-8e62-431c-b580-29f244eed857", 00:08:01.159 "strip_size_kb": 64, 00:08:01.159 "state": "online", 00:08:01.159 "raid_level": "raid0", 00:08:01.159 "superblock": true, 00:08:01.159 "num_base_bdevs": 2, 00:08:01.159 "num_base_bdevs_discovered": 2, 00:08:01.159 "num_base_bdevs_operational": 2, 00:08:01.159 "base_bdevs_list": [ 00:08:01.159 { 00:08:01.159 "name": "BaseBdev1", 00:08:01.159 "uuid": "2dd638ea-f130-525d-9ffd-ca000cb9456f", 00:08:01.159 "is_configured": true, 00:08:01.159 "data_offset": 2048, 00:08:01.159 "data_size": 63488 00:08:01.159 }, 00:08:01.159 { 00:08:01.159 "name": "BaseBdev2", 00:08:01.159 "uuid": "5ea61452-af42-5ee6-b3d6-7eeaa189a0aa", 00:08:01.159 "is_configured": true, 00:08:01.159 "data_offset": 2048, 00:08:01.159 "data_size": 63488 00:08:01.159 } 00:08:01.159 ] 00:08:01.159 }' 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.159 17:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.727 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.727 17:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.727 [2024-11-20 17:42:28.813313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.664 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.665 "name": "raid_bdev1", 00:08:02.665 "uuid": "0ad18305-8e62-431c-b580-29f244eed857", 00:08:02.665 "strip_size_kb": 64, 00:08:02.665 "state": "online", 00:08:02.665 "raid_level": "raid0", 00:08:02.665 "superblock": true, 00:08:02.665 "num_base_bdevs": 2, 00:08:02.665 "num_base_bdevs_discovered": 2, 00:08:02.665 "num_base_bdevs_operational": 2, 00:08:02.665 "base_bdevs_list": [ 00:08:02.665 { 00:08:02.665 "name": "BaseBdev1", 00:08:02.665 "uuid": "2dd638ea-f130-525d-9ffd-ca000cb9456f", 00:08:02.665 "is_configured": true, 00:08:02.665 "data_offset": 2048, 00:08:02.665 "data_size": 63488 00:08:02.665 }, 00:08:02.665 { 00:08:02.665 "name": "BaseBdev2", 00:08:02.665 "uuid": "5ea61452-af42-5ee6-b3d6-7eeaa189a0aa", 00:08:02.665 "is_configured": true, 00:08:02.665 "data_offset": 2048, 00:08:02.665 "data_size": 63488 00:08:02.665 } 00:08:02.665 ] 00:08:02.665 }' 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.665 17:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.234 [2024-11-20 17:42:30.218579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.234 [2024-11-20 17:42:30.218634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.234 [2024-11-20 17:42:30.221403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.234 [2024-11-20 17:42:30.221453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.234 [2024-11-20 17:42:30.221488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.234 [2024-11-20 17:42:30.221501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:03.234 { 00:08:03.234 "results": [ 00:08:03.234 { 00:08:03.234 "job": "raid_bdev1", 00:08:03.234 "core_mask": "0x1", 00:08:03.234 "workload": "randrw", 00:08:03.234 "percentage": 50, 00:08:03.234 "status": "finished", 00:08:03.234 "queue_depth": 1, 00:08:03.234 "io_size": 131072, 00:08:03.234 "runtime": 1.405794, 00:08:03.234 "iops": 14148.58791544138, 00:08:03.234 "mibps": 1768.5734894301725, 00:08:03.234 "io_failed": 1, 00:08:03.234 "io_timeout": 0, 00:08:03.234 "avg_latency_us": 99.38701258101193, 00:08:03.234 "min_latency_us": 25.9353711790393, 00:08:03.234 "max_latency_us": 1459.5353711790392 00:08:03.234 } 00:08:03.234 ], 00:08:03.234 "core_count": 1 00:08:03.234 } 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61794 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61794 ']' 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61794 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61794 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61794' 00:08:03.234 killing process with pid 61794 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61794 00:08:03.234 [2024-11-20 17:42:30.254166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.234 17:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61794 00:08:03.234 [2024-11-20 17:42:30.405167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3OPzsy8SKz 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:04.616 ************************************ 00:08:04.616 END TEST raid_read_error_test 00:08:04.616 ************************************ 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:04.616 00:08:04.616 real 0m4.620s 00:08:04.616 user 0m5.490s 00:08:04.616 sys 0m0.625s 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.616 17:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.616 17:42:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:04.616 17:42:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.616 17:42:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.616 17:42:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.882 ************************************ 00:08:04.882 START TEST raid_write_error_test 00:08:04.882 ************************************ 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.882 17:42:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vPVjGd4hEa 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61940 00:08:04.882 17:42:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61940 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61940 ']' 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.882 17:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.882 [2024-11-20 17:42:31.916532] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:08:04.882 [2024-11-20 17:42:31.916802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61940 ] 00:08:05.143 [2024-11-20 17:42:32.079568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.143 [2024-11-20 17:42:32.220884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.402 [2024-11-20 17:42:32.467406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.402 [2024-11-20 17:42:32.467540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.663 BaseBdev1_malloc 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.663 true 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.663 [2024-11-20 17:42:32.810270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.663 [2024-11-20 17:42:32.810345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.663 [2024-11-20 17:42:32.810368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:05.663 [2024-11-20 17:42:32.810381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.663 [2024-11-20 17:42:32.812834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.663 [2024-11-20 17:42:32.812992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.663 BaseBdev1 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.663 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.923 BaseBdev2_malloc 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:05.923 17:42:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.923 true 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.923 [2024-11-20 17:42:32.888295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:05.923 [2024-11-20 17:42:32.888399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.923 [2024-11-20 17:42:32.888422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.923 [2024-11-20 17:42:32.888437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.923 [2024-11-20 17:42:32.891118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.923 [2024-11-20 17:42:32.891184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:05.923 BaseBdev2 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.923 [2024-11-20 17:42:32.900358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:05.923 [2024-11-20 17:42:32.902597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.923 [2024-11-20 17:42:32.902827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.923 [2024-11-20 17:42:32.902847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:05.923 [2024-11-20 17:42:32.903133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:05.923 [2024-11-20 17:42:32.903336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.923 [2024-11-20 17:42:32.903475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:05.923 [2024-11-20 17:42:32.903673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.923 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test 
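The geometry logged above is internally consistent: each base bdev is a 32 MiB malloc bdev with 512-byte blocks (`bdev_malloc_create 32 512`, so 65536 blocks), the superblock (`-s`) reserves a 2048-block data offset, and raid0 striping across two members yields the reported `blockcnt 126976`. A standalone sketch, using only values copied from the log, re-derives that arithmetic:

```python
# Values taken from the log above; this sketch only re-derives the
# reported raid0 geometry, it does not talk to SPDK.
MALLOC_MIB = 32          # rpc: bdev_malloc_create 32 512
BLOCKLEN = 512
DATA_OFFSET = 2048       # blocks reserved for the raid superblock
NUM_BASE_BDEVS = 2

base_blocks = MALLOC_MIB * 1024 * 1024 // BLOCKLEN   # 65536 per base bdev
data_blocks = base_blocks - DATA_OFFSET              # 63488, matches "data_size"
raid0_blockcnt = data_blocks * NUM_BASE_BDEVS        # 126976, matches "blockcnt"

print(base_blocks, data_blocks, raid0_blockcnt)
```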
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.924 "name": "raid_bdev1", 00:08:05.924 "uuid": "58437d3e-1e3e-4f83-9259-026a85afb575", 00:08:05.924 "strip_size_kb": 64, 00:08:05.924 "state": "online", 00:08:05.924 "raid_level": "raid0", 00:08:05.924 "superblock": true, 00:08:05.924 "num_base_bdevs": 2, 00:08:05.924 "num_base_bdevs_discovered": 2, 00:08:05.924 "num_base_bdevs_operational": 2, 00:08:05.924 "base_bdevs_list": [ 00:08:05.924 { 00:08:05.924 "name": "BaseBdev1", 00:08:05.924 "uuid": "51058daa-c7ad-55fd-8a0c-13196d76dc25", 00:08:05.924 "is_configured": true, 00:08:05.924 "data_offset": 2048, 00:08:05.924 "data_size": 63488 00:08:05.924 }, 00:08:05.924 { 00:08:05.924 "name": "BaseBdev2", 00:08:05.924 "uuid": "d42d3c09-966e-5a93-a06a-eeb9853575b0", 00:08:05.924 "is_configured": true, 00:08:05.924 "data_offset": 2048, 00:08:05.924 "data_size": 63488 00:08:05.924 } 00:08:05.924 ] 00:08:05.924 }' 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.924 17:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.493 17:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:06.493 17:42:33 
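`verify_raid_bdev_state` fetches `bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing fields. The same selection can be mimicked in plain Python against the JSON captured in the log (a hypothetical standalone check, not the test's own code, with the dump reduced to the fields the test compares):

```python
import json

# JSON as dumped by `bdev_raid_get_bdevs all` in the log above,
# reduced to the fields verify_raid_bdev_state actually checks.
bdevs = json.loads("""[{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}]""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# verify_raid_bdev_state raid_bdev1 online raid0 64 2
assert info["state"] == "online"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 2
print("state check passed")
```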
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:06.493 [2024-11-20 17:42:33.456963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.433 17:42:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.433 "name": "raid_bdev1", 00:08:07.433 "uuid": "58437d3e-1e3e-4f83-9259-026a85afb575", 00:08:07.433 "strip_size_kb": 64, 00:08:07.433 "state": "online", 00:08:07.433 "raid_level": "raid0", 00:08:07.433 "superblock": true, 00:08:07.433 "num_base_bdevs": 2, 00:08:07.433 "num_base_bdevs_discovered": 2, 00:08:07.433 "num_base_bdevs_operational": 2, 00:08:07.433 "base_bdevs_list": [ 00:08:07.433 { 00:08:07.433 "name": "BaseBdev1", 00:08:07.433 "uuid": "51058daa-c7ad-55fd-8a0c-13196d76dc25", 00:08:07.433 "is_configured": true, 00:08:07.433 "data_offset": 2048, 00:08:07.433 "data_size": 63488 00:08:07.433 }, 00:08:07.433 { 00:08:07.433 "name": "BaseBdev2", 00:08:07.433 "uuid": "d42d3c09-966e-5a93-a06a-eeb9853575b0", 00:08:07.433 "is_configured": true, 00:08:07.433 "data_offset": 2048, 00:08:07.433 "data_size": 63488 00:08:07.433 } 00:08:07.433 ] 00:08:07.433 }' 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.433 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.693 [2024-11-20 17:42:34.850397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.693 [2024-11-20 17:42:34.850456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.693 [2024-11-20 17:42:34.853195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.693 [2024-11-20 17:42:34.853245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.693 [2024-11-20 17:42:34.853285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.693 [2024-11-20 17:42:34.853298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:07.693 { 00:08:07.693 "results": [ 00:08:07.693 { 00:08:07.693 "job": "raid_bdev1", 00:08:07.693 "core_mask": "0x1", 00:08:07.693 "workload": "randrw", 00:08:07.693 "percentage": 50, 00:08:07.693 "status": "finished", 00:08:07.693 "queue_depth": 1, 00:08:07.693 "io_size": 131072, 00:08:07.693 "runtime": 1.393695, 00:08:07.693 "iops": 12869.386774007226, 00:08:07.693 "mibps": 1608.6733467509032, 00:08:07.693 "io_failed": 1, 00:08:07.693 "io_timeout": 0, 00:08:07.693 "avg_latency_us": 109.21986837482865, 00:08:07.693 "min_latency_us": 25.823580786026202, 00:08:07.693 "max_latency_us": 1559.6995633187773 00:08:07.693 } 00:08:07.693 ], 00:08:07.693 "core_count": 1 00:08:07.693 } 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61940 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- 
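The bdevperf results block above is arithmetically self-consistent: `mibps` is `iops` times the 128 KiB I/O size (`-o 128k`, i.e. `io_size` 131072) divided by 2^20, and the single injected write error over the ~1.39 s runtime is what later prints as the `0.72` failures-per-second figure. A sketch using only figures copied from the results JSON:

```python
# Figures copied from the bdevperf "results" block in the log above.
iops = 12869.386774007226
io_size = 131072          # bdevperf -o 128k
runtime = 1.393695        # seconds
io_failed = 1

mibps = iops * io_size / (1024 * 1024)   # matches the logged "mibps"
fail_per_s = io_failed / runtime         # ~0.717, reported as 0.72

assert abs(mibps - 1608.6733467509032) < 1e-6
assert round(fail_per_s, 2) == 0.72
print(round(mibps, 2), round(fail_per_s, 2))
```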
common/autotest_common.sh@954 -- # '[' -z 61940 ']' 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61940 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:07.693 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.953 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61940 00:08:07.953 killing process with pid 61940 00:08:07.953 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.953 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.953 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61940' 00:08:07.953 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61940 00:08:07.953 [2024-11-20 17:42:34.900764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.953 17:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61940 00:08:07.953 [2024-11-20 17:42:35.054570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vPVjGd4hEa 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:09.331 00:08:09.331 real 0m4.650s 00:08:09.331 user 0m5.464s 00:08:09.331 sys 0m0.635s 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.331 17:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.331 ************************************ 00:08:09.331 END TEST raid_write_error_test 00:08:09.331 ************************************ 00:08:09.331 17:42:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:09.331 17:42:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:09.331 17:42:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:09.331 17:42:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.331 17:42:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.591 ************************************ 00:08:09.591 START TEST raid_state_function_test 00:08:09.591 ************************************ 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
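The `has_redundancy raid0` call above returns 1, which drives the final assertion `[[ 0.72 != \0\.\0\0 ]]`: for a raid level without redundancy, the injected base-bdev write error must surface to the application, so a nonzero failure rate is required; for a redundant level the expectation would invert. A minimal model of that branch (an assumed simplification of `bdev_raid.sh`, not its literal code — the set of redundant levels here is an assumption):

```python
# Assumed simplification of the has_redundancy branch in bdev_raid.sh:
# redundant levels absorb an injected base-bdev write error,
# raid0/concat must propagate it to the application.
def has_redundancy(level: str) -> bool:
    return level in ("raid1", "raid5f")   # assumed redundant levels

def expect_failures(level: str, fail_per_s: str) -> bool:
    if has_redundancy(level):
        return fail_per_s == "0.00"    # redundant: no I/O may fail
    return fail_per_s != "0.00"        # non-redundant: the error must surface

assert expect_failures("raid0", "0.72")
assert not expect_failures("raid0", "0.00")
print("raid0 write-error expectation holds")
```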
00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:09.591 Process raid pid: 62088 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62088 
00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62088' 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62088 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62088 ']' 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.591 17:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.591 [2024-11-20 17:42:36.615581] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:08:09.591 [2024-11-20 17:42:36.615802] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.851 [2024-11-20 17:42:36.776766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.851 [2024-11-20 17:42:36.930187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.111 [2024-11-20 17:42:37.186326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.111 [2024-11-20 17:42:37.186384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.370 [2024-11-20 17:42:37.504374] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.370 [2024-11-20 17:42:37.504457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.370 [2024-11-20 17:42:37.504469] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.370 [2024-11-20 17:42:37.504480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.370 17:42:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.370 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.371 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.371 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.371 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.371 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.629 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.629 "name": "Existed_Raid", 00:08:10.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.630 "strip_size_kb": 64, 00:08:10.630 "state": "configuring", 00:08:10.630 
"raid_level": "concat", 00:08:10.630 "superblock": false, 00:08:10.630 "num_base_bdevs": 2, 00:08:10.630 "num_base_bdevs_discovered": 0, 00:08:10.630 "num_base_bdevs_operational": 2, 00:08:10.630 "base_bdevs_list": [ 00:08:10.630 { 00:08:10.630 "name": "BaseBdev1", 00:08:10.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.630 "is_configured": false, 00:08:10.630 "data_offset": 0, 00:08:10.630 "data_size": 0 00:08:10.630 }, 00:08:10.630 { 00:08:10.630 "name": "BaseBdev2", 00:08:10.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.630 "is_configured": false, 00:08:10.630 "data_offset": 0, 00:08:10.630 "data_size": 0 00:08:10.630 } 00:08:10.630 ] 00:08:10.630 }' 00:08:10.630 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.630 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.889 [2024-11-20 17:42:37.935870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.889 [2024-11-20 17:42:37.935933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:10.889 [2024-11-20 17:42:37.947786] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.889 [2024-11-20 17:42:37.947877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.889 [2024-11-20 17:42:37.947907] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.889 [2024-11-20 17:42:37.947934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.889 17:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.889 [2024-11-20 17:42:38.004264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.889 BaseBdev1 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.889 [ 00:08:10.889 { 00:08:10.889 "name": "BaseBdev1", 00:08:10.889 "aliases": [ 00:08:10.889 "4686a9f7-e911-4eb1-bb8f-446b2ca4b329" 00:08:10.889 ], 00:08:10.889 "product_name": "Malloc disk", 00:08:10.889 "block_size": 512, 00:08:10.889 "num_blocks": 65536, 00:08:10.889 "uuid": "4686a9f7-e911-4eb1-bb8f-446b2ca4b329", 00:08:10.889 "assigned_rate_limits": { 00:08:10.889 "rw_ios_per_sec": 0, 00:08:10.889 "rw_mbytes_per_sec": 0, 00:08:10.889 "r_mbytes_per_sec": 0, 00:08:10.889 "w_mbytes_per_sec": 0 00:08:10.889 }, 00:08:10.889 "claimed": true, 00:08:10.889 "claim_type": "exclusive_write", 00:08:10.889 "zoned": false, 00:08:10.889 "supported_io_types": { 00:08:10.889 "read": true, 00:08:10.889 "write": true, 00:08:10.889 "unmap": true, 00:08:10.889 "flush": true, 00:08:10.889 "reset": true, 00:08:10.889 "nvme_admin": false, 00:08:10.889 "nvme_io": false, 00:08:10.889 "nvme_io_md": false, 00:08:10.889 "write_zeroes": true, 00:08:10.889 "zcopy": true, 00:08:10.889 "get_zone_info": false, 00:08:10.889 "zone_management": false, 00:08:10.889 "zone_append": false, 00:08:10.889 "compare": false, 00:08:10.889 "compare_and_write": false, 00:08:10.889 "abort": true, 00:08:10.889 "seek_hole": false, 00:08:10.889 "seek_data": false, 00:08:10.889 "copy": true, 00:08:10.889 "nvme_iov_md": 
false 00:08:10.889 }, 00:08:10.889 "memory_domains": [ 00:08:10.889 { 00:08:10.889 "dma_device_id": "system", 00:08:10.889 "dma_device_type": 1 00:08:10.889 }, 00:08:10.889 { 00:08:10.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.889 "dma_device_type": 2 00:08:10.889 } 00:08:10.889 ], 00:08:10.889 "driver_specific": {} 00:08:10.889 } 00:08:10.889 ] 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.889 17:42:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.889 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.148 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.148 "name": "Existed_Raid", 00:08:11.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.148 "strip_size_kb": 64, 00:08:11.148 "state": "configuring", 00:08:11.148 "raid_level": "concat", 00:08:11.148 "superblock": false, 00:08:11.148 "num_base_bdevs": 2, 00:08:11.148 "num_base_bdevs_discovered": 1, 00:08:11.148 "num_base_bdevs_operational": 2, 00:08:11.148 "base_bdevs_list": [ 00:08:11.148 { 00:08:11.148 "name": "BaseBdev1", 00:08:11.148 "uuid": "4686a9f7-e911-4eb1-bb8f-446b2ca4b329", 00:08:11.148 "is_configured": true, 00:08:11.148 "data_offset": 0, 00:08:11.148 "data_size": 65536 00:08:11.148 }, 00:08:11.148 { 00:08:11.148 "name": "BaseBdev2", 00:08:11.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.148 "is_configured": false, 00:08:11.148 "data_offset": 0, 00:08:11.148 "data_size": 0 00:08:11.148 } 00:08:11.148 ] 00:08:11.148 }' 00:08:11.148 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.148 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.408 [2024-11-20 17:42:38.539431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.408 [2024-11-20 17:42:38.539525] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.408 [2024-11-20 17:42:38.551425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.408 [2024-11-20 17:42:38.553611] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.408 [2024-11-20 17:42:38.553741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.408 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.409 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.409 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.409 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.409 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.680 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.680 "name": "Existed_Raid", 00:08:11.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.680 "strip_size_kb": 64, 00:08:11.680 "state": "configuring", 00:08:11.680 "raid_level": "concat", 00:08:11.680 "superblock": false, 00:08:11.680 "num_base_bdevs": 2, 00:08:11.680 "num_base_bdevs_discovered": 1, 00:08:11.680 "num_base_bdevs_operational": 2, 00:08:11.680 "base_bdevs_list": [ 00:08:11.680 { 00:08:11.680 "name": "BaseBdev1", 00:08:11.680 "uuid": "4686a9f7-e911-4eb1-bb8f-446b2ca4b329", 00:08:11.680 "is_configured": true, 00:08:11.680 "data_offset": 0, 00:08:11.680 "data_size": 65536 00:08:11.680 }, 00:08:11.680 { 00:08:11.680 "name": "BaseBdev2", 00:08:11.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.680 "is_configured": false, 00:08:11.680 "data_offset": 0, 00:08:11.681 "data_size": 0 
00:08:11.681 } 00:08:11.681 ] 00:08:11.681 }' 00:08:11.681 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.681 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.946 17:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.946 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.946 17:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.946 [2024-11-20 17:42:39.043402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.946 [2024-11-20 17:42:39.043577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:11.946 [2024-11-20 17:42:39.043605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:11.946 [2024-11-20 17:42:39.043939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:11.946 [2024-11-20 17:42:39.044194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:11.946 [2024-11-20 17:42:39.044243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:11.946 [2024-11-20 17:42:39.044590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.946 BaseBdev2 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.946 17:42:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.946 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.946 [ 00:08:11.946 { 00:08:11.946 "name": "BaseBdev2", 00:08:11.946 "aliases": [ 00:08:11.946 "dfe67020-8655-4b2b-9600-64bd6062bf60" 00:08:11.946 ], 00:08:11.946 "product_name": "Malloc disk", 00:08:11.946 "block_size": 512, 00:08:11.946 "num_blocks": 65536, 00:08:11.946 "uuid": "dfe67020-8655-4b2b-9600-64bd6062bf60", 00:08:11.946 "assigned_rate_limits": { 00:08:11.946 "rw_ios_per_sec": 0, 00:08:11.946 "rw_mbytes_per_sec": 0, 00:08:11.946 "r_mbytes_per_sec": 0, 00:08:11.946 "w_mbytes_per_sec": 0 00:08:11.946 }, 00:08:11.946 "claimed": true, 00:08:11.946 "claim_type": "exclusive_write", 00:08:11.946 "zoned": false, 00:08:11.946 "supported_io_types": { 00:08:11.946 "read": true, 00:08:11.946 "write": true, 00:08:11.946 "unmap": true, 00:08:11.946 "flush": true, 00:08:11.946 "reset": true, 00:08:11.946 "nvme_admin": false, 00:08:11.946 "nvme_io": false, 00:08:11.946 "nvme_io_md": 
false, 00:08:11.946 "write_zeroes": true, 00:08:11.946 "zcopy": true, 00:08:11.946 "get_zone_info": false, 00:08:11.946 "zone_management": false, 00:08:11.946 "zone_append": false, 00:08:11.946 "compare": false, 00:08:11.946 "compare_and_write": false, 00:08:11.946 "abort": true, 00:08:11.946 "seek_hole": false, 00:08:11.946 "seek_data": false, 00:08:11.946 "copy": true, 00:08:11.946 "nvme_iov_md": false 00:08:11.946 }, 00:08:11.946 "memory_domains": [ 00:08:11.946 { 00:08:11.946 "dma_device_id": "system", 00:08:11.946 "dma_device_type": 1 00:08:11.946 }, 00:08:11.946 { 00:08:11.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.946 "dma_device_type": 2 00:08:11.946 } 00:08:11.946 ], 00:08:11.946 "driver_specific": {} 00:08:11.946 } 00:08:11.947 ] 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.947 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.206 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.206 "name": "Existed_Raid", 00:08:12.206 "uuid": "463a3e13-dd78-4f79-abf6-e90568d07c44", 00:08:12.206 "strip_size_kb": 64, 00:08:12.206 "state": "online", 00:08:12.206 "raid_level": "concat", 00:08:12.206 "superblock": false, 00:08:12.206 "num_base_bdevs": 2, 00:08:12.206 "num_base_bdevs_discovered": 2, 00:08:12.206 "num_base_bdevs_operational": 2, 00:08:12.206 "base_bdevs_list": [ 00:08:12.206 { 00:08:12.206 "name": "BaseBdev1", 00:08:12.206 "uuid": "4686a9f7-e911-4eb1-bb8f-446b2ca4b329", 00:08:12.206 "is_configured": true, 00:08:12.206 "data_offset": 0, 00:08:12.206 "data_size": 65536 00:08:12.206 }, 00:08:12.206 { 00:08:12.206 "name": "BaseBdev2", 00:08:12.206 "uuid": "dfe67020-8655-4b2b-9600-64bd6062bf60", 00:08:12.206 "is_configured": true, 00:08:12.206 "data_offset": 0, 00:08:12.206 "data_size": 65536 00:08:12.206 } 00:08:12.206 ] 00:08:12.206 }' 00:08:12.206 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:12.206 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.465 [2024-11-20 17:42:39.558965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.465 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.465 "name": "Existed_Raid", 00:08:12.465 "aliases": [ 00:08:12.465 "463a3e13-dd78-4f79-abf6-e90568d07c44" 00:08:12.465 ], 00:08:12.465 "product_name": "Raid Volume", 00:08:12.465 "block_size": 512, 00:08:12.465 "num_blocks": 131072, 00:08:12.465 "uuid": "463a3e13-dd78-4f79-abf6-e90568d07c44", 00:08:12.465 "assigned_rate_limits": { 00:08:12.465 "rw_ios_per_sec": 0, 00:08:12.465 "rw_mbytes_per_sec": 0, 00:08:12.465 "r_mbytes_per_sec": 
0, 00:08:12.465 "w_mbytes_per_sec": 0 00:08:12.465 }, 00:08:12.465 "claimed": false, 00:08:12.465 "zoned": false, 00:08:12.465 "supported_io_types": { 00:08:12.465 "read": true, 00:08:12.465 "write": true, 00:08:12.465 "unmap": true, 00:08:12.465 "flush": true, 00:08:12.465 "reset": true, 00:08:12.465 "nvme_admin": false, 00:08:12.465 "nvme_io": false, 00:08:12.465 "nvme_io_md": false, 00:08:12.465 "write_zeroes": true, 00:08:12.465 "zcopy": false, 00:08:12.465 "get_zone_info": false, 00:08:12.465 "zone_management": false, 00:08:12.465 "zone_append": false, 00:08:12.465 "compare": false, 00:08:12.465 "compare_and_write": false, 00:08:12.465 "abort": false, 00:08:12.465 "seek_hole": false, 00:08:12.465 "seek_data": false, 00:08:12.465 "copy": false, 00:08:12.465 "nvme_iov_md": false 00:08:12.465 }, 00:08:12.465 "memory_domains": [ 00:08:12.465 { 00:08:12.466 "dma_device_id": "system", 00:08:12.466 "dma_device_type": 1 00:08:12.466 }, 00:08:12.466 { 00:08:12.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.466 "dma_device_type": 2 00:08:12.466 }, 00:08:12.466 { 00:08:12.466 "dma_device_id": "system", 00:08:12.466 "dma_device_type": 1 00:08:12.466 }, 00:08:12.466 { 00:08:12.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.466 "dma_device_type": 2 00:08:12.466 } 00:08:12.466 ], 00:08:12.466 "driver_specific": { 00:08:12.466 "raid": { 00:08:12.466 "uuid": "463a3e13-dd78-4f79-abf6-e90568d07c44", 00:08:12.466 "strip_size_kb": 64, 00:08:12.466 "state": "online", 00:08:12.466 "raid_level": "concat", 00:08:12.466 "superblock": false, 00:08:12.466 "num_base_bdevs": 2, 00:08:12.466 "num_base_bdevs_discovered": 2, 00:08:12.466 "num_base_bdevs_operational": 2, 00:08:12.466 "base_bdevs_list": [ 00:08:12.466 { 00:08:12.466 "name": "BaseBdev1", 00:08:12.466 "uuid": "4686a9f7-e911-4eb1-bb8f-446b2ca4b329", 00:08:12.466 "is_configured": true, 00:08:12.466 "data_offset": 0, 00:08:12.466 "data_size": 65536 00:08:12.466 }, 00:08:12.466 { 00:08:12.466 "name": "BaseBdev2", 
00:08:12.466 "uuid": "dfe67020-8655-4b2b-9600-64bd6062bf60", 00:08:12.466 "is_configured": true, 00:08:12.466 "data_offset": 0, 00:08:12.466 "data_size": 65536 00:08:12.466 } 00:08:12.466 ] 00:08:12.466 } 00:08:12.466 } 00:08:12.466 }' 00:08:12.466 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:12.725 BaseBdev2' 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.725 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.725 [2024-11-20 17:42:39.806344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.725 [2024-11-20 17:42:39.806404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.725 [2024-11-20 17:42:39.806478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.984 "name": "Existed_Raid", 00:08:12.984 "uuid": "463a3e13-dd78-4f79-abf6-e90568d07c44", 00:08:12.984 "strip_size_kb": 64, 00:08:12.984 
"state": "offline", 00:08:12.984 "raid_level": "concat", 00:08:12.984 "superblock": false, 00:08:12.984 "num_base_bdevs": 2, 00:08:12.984 "num_base_bdevs_discovered": 1, 00:08:12.984 "num_base_bdevs_operational": 1, 00:08:12.984 "base_bdevs_list": [ 00:08:12.984 { 00:08:12.984 "name": null, 00:08:12.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.984 "is_configured": false, 00:08:12.984 "data_offset": 0, 00:08:12.984 "data_size": 65536 00:08:12.984 }, 00:08:12.984 { 00:08:12.984 "name": "BaseBdev2", 00:08:12.984 "uuid": "dfe67020-8655-4b2b-9600-64bd6062bf60", 00:08:12.984 "is_configured": true, 00:08:12.984 "data_offset": 0, 00:08:12.984 "data_size": 65536 00:08:12.984 } 00:08:12.984 ] 00:08:12.984 }' 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.984 17:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.243 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.243 [2024-11-20 17:42:40.411835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.243 [2024-11-20 17:42:40.412026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62088 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62088 ']' 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62088 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62088 00:08:13.503 killing process with pid 62088 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62088' 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62088 00:08:13.503 [2024-11-20 17:42:40.621137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.503 17:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62088 00:08:13.503 [2024-11-20 17:42:40.638950] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:14.881 00:08:14.881 real 0m5.346s 00:08:14.881 user 0m7.598s 00:08:14.881 sys 0m0.932s 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.881 ************************************ 00:08:14.881 END TEST raid_state_function_test 00:08:14.881 ************************************ 00:08:14.881 17:42:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:14.881 17:42:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:14.881 17:42:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.881 17:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.881 ************************************ 00:08:14.881 START TEST raid_state_function_test_sb 00:08:14.881 ************************************ 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:14.881 Process raid pid: 62341 00:08:14.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62341 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62341' 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62341 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62341 ']' 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:14.881 17:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.881 [2024-11-20 17:42:42.019472] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:14.881 [2024-11-20 17:42:42.019693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.139 [2024-11-20 17:42:42.194190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.139 [2024-11-20 17:42:42.311917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.396 [2024-11-20 17:42:42.521514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.396 [2024-11-20 17:42:42.521561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.962 [2024-11-20 17:42:42.893873] bdev.c:8485:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:15.962 [2024-11-20 17:42:42.893937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.962 [2024-11-20 17:42:42.893953] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.962 [2024-11-20 17:42:42.893964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.962 "name": "Existed_Raid", 00:08:15.962 "uuid": "3bf8e90e-e7c5-4e43-a683-e997646766d6", 00:08:15.962 "strip_size_kb": 64, 00:08:15.962 "state": "configuring", 00:08:15.962 "raid_level": "concat", 00:08:15.962 "superblock": true, 00:08:15.962 "num_base_bdevs": 2, 00:08:15.962 "num_base_bdevs_discovered": 0, 00:08:15.962 "num_base_bdevs_operational": 2, 00:08:15.962 "base_bdevs_list": [ 00:08:15.962 { 00:08:15.962 "name": "BaseBdev1", 00:08:15.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.962 "is_configured": false, 00:08:15.962 "data_offset": 0, 00:08:15.962 "data_size": 0 00:08:15.962 }, 00:08:15.962 { 00:08:15.962 "name": "BaseBdev2", 00:08:15.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.962 "is_configured": false, 00:08:15.962 "data_offset": 0, 00:08:15.962 "data_size": 0 00:08:15.962 } 00:08:15.962 ] 00:08:15.962 }' 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.962 17:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.220 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.220 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.220 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.221 [2024-11-20 17:42:43.345082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:16.221 [2024-11-20 17:42:43.345190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.221 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.221 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.221 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.221 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.221 [2024-11-20 17:42:43.357059] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.221 [2024-11-20 17:42:43.357149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.221 [2024-11-20 17:42:43.357187] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.221 [2024-11-20 17:42:43.357218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.221 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.221 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.221 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.221 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.488 [2024-11-20 17:42:43.406759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.488 BaseBdev1 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.488 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.488 [ 00:08:16.488 { 00:08:16.488 "name": "BaseBdev1", 00:08:16.488 "aliases": [ 00:08:16.488 "b0657b0b-28b3-4c8a-ba64-c5911452ad18" 00:08:16.488 ], 00:08:16.488 "product_name": "Malloc disk", 00:08:16.488 "block_size": 512, 00:08:16.488 "num_blocks": 65536, 00:08:16.488 "uuid": "b0657b0b-28b3-4c8a-ba64-c5911452ad18", 00:08:16.488 "assigned_rate_limits": { 00:08:16.488 "rw_ios_per_sec": 0, 00:08:16.488 "rw_mbytes_per_sec": 0, 00:08:16.488 "r_mbytes_per_sec": 0, 00:08:16.488 "w_mbytes_per_sec": 0 00:08:16.488 }, 00:08:16.488 "claimed": true, 
00:08:16.488 "claim_type": "exclusive_write", 00:08:16.488 "zoned": false, 00:08:16.488 "supported_io_types": { 00:08:16.488 "read": true, 00:08:16.488 "write": true, 00:08:16.488 "unmap": true, 00:08:16.488 "flush": true, 00:08:16.488 "reset": true, 00:08:16.488 "nvme_admin": false, 00:08:16.488 "nvme_io": false, 00:08:16.488 "nvme_io_md": false, 00:08:16.488 "write_zeroes": true, 00:08:16.488 "zcopy": true, 00:08:16.488 "get_zone_info": false, 00:08:16.488 "zone_management": false, 00:08:16.488 "zone_append": false, 00:08:16.488 "compare": false, 00:08:16.488 "compare_and_write": false, 00:08:16.488 "abort": true, 00:08:16.488 "seek_hole": false, 00:08:16.488 "seek_data": false, 00:08:16.488 "copy": true, 00:08:16.488 "nvme_iov_md": false 00:08:16.488 }, 00:08:16.488 "memory_domains": [ 00:08:16.488 { 00:08:16.489 "dma_device_id": "system", 00:08:16.489 "dma_device_type": 1 00:08:16.489 }, 00:08:16.489 { 00:08:16.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.489 "dma_device_type": 2 00:08:16.489 } 00:08:16.489 ], 00:08:16.489 "driver_specific": {} 00:08:16.489 } 00:08:16.489 ] 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.489 17:42:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.489 "name": "Existed_Raid", 00:08:16.489 "uuid": "cb6139dd-6ca8-43ba-8e57-44cf54f6a0f6", 00:08:16.489 "strip_size_kb": 64, 00:08:16.489 "state": "configuring", 00:08:16.489 "raid_level": "concat", 00:08:16.489 "superblock": true, 00:08:16.489 "num_base_bdevs": 2, 00:08:16.489 "num_base_bdevs_discovered": 1, 00:08:16.489 "num_base_bdevs_operational": 2, 00:08:16.489 "base_bdevs_list": [ 00:08:16.489 { 00:08:16.489 "name": "BaseBdev1", 00:08:16.489 "uuid": "b0657b0b-28b3-4c8a-ba64-c5911452ad18", 00:08:16.489 "is_configured": true, 00:08:16.489 "data_offset": 2048, 00:08:16.489 "data_size": 63488 00:08:16.489 }, 00:08:16.489 { 00:08:16.489 "name": "BaseBdev2", 00:08:16.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.489 
"is_configured": false, 00:08:16.489 "data_offset": 0, 00:08:16.489 "data_size": 0 00:08:16.489 } 00:08:16.489 ] 00:08:16.489 }' 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.489 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.748 [2024-11-20 17:42:43.874020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.748 [2024-11-20 17:42:43.874101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.748 [2024-11-20 17:42:43.886080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.748 [2024-11-20 17:42:43.887910] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.748 [2024-11-20 17:42:43.887952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.748 17:42:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.748 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.007 17:42:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.007 "name": "Existed_Raid", 00:08:17.007 "uuid": "9f9a97b8-f8f7-4827-b4b3-05375809b097", 00:08:17.007 "strip_size_kb": 64, 00:08:17.007 "state": "configuring", 00:08:17.007 "raid_level": "concat", 00:08:17.007 "superblock": true, 00:08:17.007 "num_base_bdevs": 2, 00:08:17.007 "num_base_bdevs_discovered": 1, 00:08:17.007 "num_base_bdevs_operational": 2, 00:08:17.007 "base_bdevs_list": [ 00:08:17.007 { 00:08:17.007 "name": "BaseBdev1", 00:08:17.007 "uuid": "b0657b0b-28b3-4c8a-ba64-c5911452ad18", 00:08:17.007 "is_configured": true, 00:08:17.007 "data_offset": 2048, 00:08:17.007 "data_size": 63488 00:08:17.007 }, 00:08:17.007 { 00:08:17.007 "name": "BaseBdev2", 00:08:17.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.007 "is_configured": false, 00:08:17.007 "data_offset": 0, 00:08:17.007 "data_size": 0 00:08:17.007 } 00:08:17.007 ] 00:08:17.007 }' 00:08:17.007 17:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.007 17:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.268 [2024-11-20 17:42:44.344177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.268 [2024-11-20 17:42:44.344531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.268 [2024-11-20 17:42:44.344590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:17.268 [2024-11-20 17:42:44.344866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:17.268 [2024-11-20 17:42:44.345105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.268 [2024-11-20 17:42:44.345159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.268 BaseBdev2 00:08:17.268 [2024-11-20 17:42:44.345388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.268 17:42:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.268 [ 00:08:17.268 { 00:08:17.268 "name": "BaseBdev2", 00:08:17.268 "aliases": [ 00:08:17.268 "da56322f-4c12-4b5c-8ecb-cb3027d177c9" 00:08:17.268 ], 00:08:17.268 "product_name": "Malloc disk", 00:08:17.268 "block_size": 512, 00:08:17.268 "num_blocks": 65536, 00:08:17.268 "uuid": "da56322f-4c12-4b5c-8ecb-cb3027d177c9", 00:08:17.268 "assigned_rate_limits": { 00:08:17.268 "rw_ios_per_sec": 0, 00:08:17.268 "rw_mbytes_per_sec": 0, 00:08:17.268 "r_mbytes_per_sec": 0, 00:08:17.268 "w_mbytes_per_sec": 0 00:08:17.268 }, 00:08:17.268 "claimed": true, 00:08:17.268 "claim_type": "exclusive_write", 00:08:17.268 "zoned": false, 00:08:17.268 "supported_io_types": { 00:08:17.268 "read": true, 00:08:17.268 "write": true, 00:08:17.268 "unmap": true, 00:08:17.268 "flush": true, 00:08:17.268 "reset": true, 00:08:17.268 "nvme_admin": false, 00:08:17.268 "nvme_io": false, 00:08:17.268 "nvme_io_md": false, 00:08:17.268 "write_zeroes": true, 00:08:17.268 "zcopy": true, 00:08:17.268 "get_zone_info": false, 00:08:17.268 "zone_management": false, 00:08:17.268 "zone_append": false, 00:08:17.268 "compare": false, 00:08:17.268 "compare_and_write": false, 00:08:17.268 "abort": true, 00:08:17.268 "seek_hole": false, 00:08:17.268 "seek_data": false, 00:08:17.268 "copy": true, 00:08:17.268 "nvme_iov_md": false 00:08:17.268 }, 00:08:17.268 "memory_domains": [ 00:08:17.268 { 00:08:17.268 "dma_device_id": "system", 00:08:17.268 "dma_device_type": 1 00:08:17.268 }, 00:08:17.268 { 00:08:17.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.268 "dma_device_type": 2 00:08:17.268 } 00:08:17.268 ], 00:08:17.268 "driver_specific": {} 00:08:17.268 } 00:08:17.268 ] 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:17.268 17:42:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.268 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.268 17:42:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.268 "name": "Existed_Raid", 00:08:17.268 "uuid": "9f9a97b8-f8f7-4827-b4b3-05375809b097", 00:08:17.268 "strip_size_kb": 64, 00:08:17.268 "state": "online", 00:08:17.268 "raid_level": "concat", 00:08:17.268 "superblock": true, 00:08:17.268 "num_base_bdevs": 2, 00:08:17.268 "num_base_bdevs_discovered": 2, 00:08:17.268 "num_base_bdevs_operational": 2, 00:08:17.268 "base_bdevs_list": [ 00:08:17.268 { 00:08:17.268 "name": "BaseBdev1", 00:08:17.268 "uuid": "b0657b0b-28b3-4c8a-ba64-c5911452ad18", 00:08:17.268 "is_configured": true, 00:08:17.268 "data_offset": 2048, 00:08:17.268 "data_size": 63488 00:08:17.268 }, 00:08:17.268 { 00:08:17.268 "name": "BaseBdev2", 00:08:17.268 "uuid": "da56322f-4c12-4b5c-8ecb-cb3027d177c9", 00:08:17.268 "is_configured": true, 00:08:17.268 "data_offset": 2048, 00:08:17.268 "data_size": 63488 00:08:17.268 } 00:08:17.269 ] 00:08:17.269 }' 00:08:17.269 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.269 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.837 [2024-11-20 17:42:44.871572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.837 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.837 "name": "Existed_Raid", 00:08:17.837 "aliases": [ 00:08:17.837 "9f9a97b8-f8f7-4827-b4b3-05375809b097" 00:08:17.837 ], 00:08:17.837 "product_name": "Raid Volume", 00:08:17.837 "block_size": 512, 00:08:17.837 "num_blocks": 126976, 00:08:17.837 "uuid": "9f9a97b8-f8f7-4827-b4b3-05375809b097", 00:08:17.837 "assigned_rate_limits": { 00:08:17.837 "rw_ios_per_sec": 0, 00:08:17.837 "rw_mbytes_per_sec": 0, 00:08:17.837 "r_mbytes_per_sec": 0, 00:08:17.837 "w_mbytes_per_sec": 0 00:08:17.837 }, 00:08:17.837 "claimed": false, 00:08:17.837 "zoned": false, 00:08:17.837 "supported_io_types": { 00:08:17.837 "read": true, 00:08:17.837 "write": true, 00:08:17.837 "unmap": true, 00:08:17.837 "flush": true, 00:08:17.838 "reset": true, 00:08:17.838 "nvme_admin": false, 00:08:17.838 "nvme_io": false, 00:08:17.838 "nvme_io_md": false, 00:08:17.838 "write_zeroes": true, 00:08:17.838 "zcopy": false, 00:08:17.838 "get_zone_info": false, 00:08:17.838 "zone_management": false, 00:08:17.838 "zone_append": false, 00:08:17.838 "compare": false, 00:08:17.838 "compare_and_write": false, 00:08:17.838 "abort": false, 00:08:17.838 "seek_hole": false, 00:08:17.838 "seek_data": false, 00:08:17.838 "copy": false, 00:08:17.838 "nvme_iov_md": false 00:08:17.838 }, 00:08:17.838 "memory_domains": [ 00:08:17.838 { 00:08:17.838 
"dma_device_id": "system", 00:08:17.838 "dma_device_type": 1 00:08:17.838 }, 00:08:17.838 { 00:08:17.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.838 "dma_device_type": 2 00:08:17.838 }, 00:08:17.838 { 00:08:17.838 "dma_device_id": "system", 00:08:17.838 "dma_device_type": 1 00:08:17.838 }, 00:08:17.838 { 00:08:17.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.838 "dma_device_type": 2 00:08:17.838 } 00:08:17.838 ], 00:08:17.838 "driver_specific": { 00:08:17.838 "raid": { 00:08:17.838 "uuid": "9f9a97b8-f8f7-4827-b4b3-05375809b097", 00:08:17.838 "strip_size_kb": 64, 00:08:17.838 "state": "online", 00:08:17.838 "raid_level": "concat", 00:08:17.838 "superblock": true, 00:08:17.838 "num_base_bdevs": 2, 00:08:17.838 "num_base_bdevs_discovered": 2, 00:08:17.838 "num_base_bdevs_operational": 2, 00:08:17.838 "base_bdevs_list": [ 00:08:17.838 { 00:08:17.838 "name": "BaseBdev1", 00:08:17.838 "uuid": "b0657b0b-28b3-4c8a-ba64-c5911452ad18", 00:08:17.838 "is_configured": true, 00:08:17.838 "data_offset": 2048, 00:08:17.838 "data_size": 63488 00:08:17.838 }, 00:08:17.838 { 00:08:17.838 "name": "BaseBdev2", 00:08:17.838 "uuid": "da56322f-4c12-4b5c-8ecb-cb3027d177c9", 00:08:17.838 "is_configured": true, 00:08:17.838 "data_offset": 2048, 00:08:17.838 "data_size": 63488 00:08:17.838 } 00:08:17.838 ] 00:08:17.838 } 00:08:17.838 } 00:08:17.838 }' 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:17.838 BaseBdev2' 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.838 17:42:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.838 17:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.099 [2024-11-20 17:42:45.075051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.099 [2024-11-20 17:42:45.075085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.099 [2024-11-20 17:42:45.075137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.099 "name": "Existed_Raid", 00:08:18.099 "uuid": "9f9a97b8-f8f7-4827-b4b3-05375809b097", 00:08:18.099 "strip_size_kb": 64, 00:08:18.099 "state": "offline", 00:08:18.099 "raid_level": "concat", 00:08:18.099 "superblock": true, 00:08:18.099 "num_base_bdevs": 2, 00:08:18.099 "num_base_bdevs_discovered": 1, 00:08:18.099 "num_base_bdevs_operational": 1, 00:08:18.099 "base_bdevs_list": [ 00:08:18.099 { 00:08:18.099 "name": null, 00:08:18.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.099 "is_configured": false, 00:08:18.099 "data_offset": 0, 00:08:18.099 "data_size": 63488 00:08:18.099 }, 00:08:18.099 { 00:08:18.099 "name": "BaseBdev2", 00:08:18.099 "uuid": "da56322f-4c12-4b5c-8ecb-cb3027d177c9", 00:08:18.099 "is_configured": true, 00:08:18.099 "data_offset": 2048, 00:08:18.099 "data_size": 63488 00:08:18.099 } 00:08:18.099 ] 
00:08:18.099 }' 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.099 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.668 [2024-11-20 17:42:45.612544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.668 [2024-11-20 17:42:45.612600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.668 17:42:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.668 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62341 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62341 ']' 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62341 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62341 00:08:18.669 killing process with pid 62341 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62341' 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62341 00:08:18.669 [2024-11-20 17:42:45.809108] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.669 17:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62341 00:08:18.669 [2024-11-20 17:42:45.826450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.048 17:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.048 00:08:20.048 real 0m5.037s 00:08:20.048 user 0m7.230s 00:08:20.048 sys 0m0.842s 00:08:20.048 17:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.048 17:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.048 ************************************ 00:08:20.048 END TEST raid_state_function_test_sb 00:08:20.048 ************************************ 00:08:20.048 17:42:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:20.048 17:42:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:20.048 17:42:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.048 17:42:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.048 ************************************ 00:08:20.048 START TEST raid_superblock_test 00:08:20.048 ************************************ 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:20.048 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62589 00:08:20.049 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:20.049 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62589 00:08:20.049 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62589 ']' 00:08:20.049 
17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.049 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.049 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.049 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.049 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.049 [2024-11-20 17:42:47.120089] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:20.049 [2024-11-20 17:42:47.120287] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62589 ] 00:08:20.308 [2024-11-20 17:42:47.297924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.308 [2024-11-20 17:42:47.413187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.579 [2024-11-20 17:42:47.615576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.579 [2024-11-20 17:42:47.615732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.852 17:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.852 malloc1 00:08:20.852 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.852 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:20.852 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.852 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.852 [2024-11-20 17:42:48.018482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:20.852 [2024-11-20 17:42:48.018548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.852 [2024-11-20 17:42:48.018581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:20.852 [2024-11-20 17:42:48.018590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:20.852 [2024-11-20 17:42:48.020812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.852 [2024-11-20 17:42:48.020850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:20.852 pt1 00:08:20.852 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:20.853 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.112 malloc2 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.112 [2024-11-20 17:42:48.073426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.112 [2024-11-20 17:42:48.073534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.112 [2024-11-20 17:42:48.073578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:21.112 [2024-11-20 17:42:48.073609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.112 [2024-11-20 17:42:48.075756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.112 [2024-11-20 17:42:48.075826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.112 pt2 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.112 [2024-11-20 17:42:48.085460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.112 [2024-11-20 17:42:48.087290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.112 [2024-11-20 17:42:48.087496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:21.112 [2024-11-20 17:42:48.087543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:21.112 [2024-11-20 17:42:48.087846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:21.112 [2024-11-20 17:42:48.088070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:21.112 [2024-11-20 17:42:48.088119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:21.112 [2024-11-20 17:42:48.088310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.112 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.113 17:42:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.113 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.113 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.113 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.113 "name": "raid_bdev1", 00:08:21.113 "uuid": "f2f28627-41ab-44d0-8ae7-3868f0e974ad", 00:08:21.113 "strip_size_kb": 64, 00:08:21.113 "state": "online", 00:08:21.113 "raid_level": "concat", 00:08:21.113 "superblock": true, 00:08:21.113 "num_base_bdevs": 2, 00:08:21.113 "num_base_bdevs_discovered": 2, 00:08:21.113 "num_base_bdevs_operational": 2, 00:08:21.113 "base_bdevs_list": [ 00:08:21.113 { 00:08:21.113 "name": "pt1", 00:08:21.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.113 "is_configured": true, 00:08:21.113 "data_offset": 2048, 00:08:21.113 "data_size": 63488 00:08:21.113 }, 00:08:21.113 { 00:08:21.113 "name": "pt2", 00:08:21.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.113 "is_configured": true, 00:08:21.113 "data_offset": 2048, 00:08:21.113 "data_size": 63488 00:08:21.113 } 00:08:21.113 ] 00:08:21.113 }' 00:08:21.113 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.113 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 [2024-11-20 17:42:48.520994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.373 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.632 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.632 "name": "raid_bdev1", 00:08:21.632 "aliases": [ 00:08:21.632 "f2f28627-41ab-44d0-8ae7-3868f0e974ad" 00:08:21.632 ], 00:08:21.632 "product_name": "Raid Volume", 00:08:21.632 "block_size": 512, 00:08:21.632 "num_blocks": 126976, 00:08:21.632 "uuid": "f2f28627-41ab-44d0-8ae7-3868f0e974ad", 00:08:21.632 "assigned_rate_limits": { 00:08:21.632 "rw_ios_per_sec": 0, 00:08:21.632 "rw_mbytes_per_sec": 0, 00:08:21.632 "r_mbytes_per_sec": 0, 00:08:21.632 "w_mbytes_per_sec": 0 00:08:21.632 }, 00:08:21.632 "claimed": false, 00:08:21.632 "zoned": false, 00:08:21.632 "supported_io_types": { 00:08:21.632 "read": true, 00:08:21.632 "write": true, 00:08:21.632 "unmap": true, 00:08:21.632 "flush": true, 00:08:21.632 "reset": true, 00:08:21.632 "nvme_admin": false, 00:08:21.632 "nvme_io": false, 00:08:21.632 "nvme_io_md": false, 00:08:21.632 "write_zeroes": true, 00:08:21.632 "zcopy": false, 00:08:21.632 "get_zone_info": false, 00:08:21.632 "zone_management": false, 00:08:21.632 "zone_append": false, 00:08:21.632 "compare": false, 00:08:21.632 "compare_and_write": false, 00:08:21.632 "abort": false, 00:08:21.632 
"seek_hole": false, 00:08:21.632 "seek_data": false, 00:08:21.632 "copy": false, 00:08:21.632 "nvme_iov_md": false 00:08:21.632 }, 00:08:21.633 "memory_domains": [ 00:08:21.633 { 00:08:21.633 "dma_device_id": "system", 00:08:21.633 "dma_device_type": 1 00:08:21.633 }, 00:08:21.633 { 00:08:21.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.633 "dma_device_type": 2 00:08:21.633 }, 00:08:21.633 { 00:08:21.633 "dma_device_id": "system", 00:08:21.633 "dma_device_type": 1 00:08:21.633 }, 00:08:21.633 { 00:08:21.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.633 "dma_device_type": 2 00:08:21.633 } 00:08:21.633 ], 00:08:21.633 "driver_specific": { 00:08:21.633 "raid": { 00:08:21.633 "uuid": "f2f28627-41ab-44d0-8ae7-3868f0e974ad", 00:08:21.633 "strip_size_kb": 64, 00:08:21.633 "state": "online", 00:08:21.633 "raid_level": "concat", 00:08:21.633 "superblock": true, 00:08:21.633 "num_base_bdevs": 2, 00:08:21.633 "num_base_bdevs_discovered": 2, 00:08:21.633 "num_base_bdevs_operational": 2, 00:08:21.633 "base_bdevs_list": [ 00:08:21.633 { 00:08:21.633 "name": "pt1", 00:08:21.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.633 "is_configured": true, 00:08:21.633 "data_offset": 2048, 00:08:21.633 "data_size": 63488 00:08:21.633 }, 00:08:21.633 { 00:08:21.633 "name": "pt2", 00:08:21.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.633 "is_configured": true, 00:08:21.633 "data_offset": 2048, 00:08:21.633 "data_size": 63488 00:08:21.633 } 00:08:21.633 ] 00:08:21.633 } 00:08:21.633 } 00:08:21.633 }' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:21.633 pt2' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:21.633 [2024-11-20 17:42:48.744694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f2f28627-41ab-44d0-8ae7-3868f0e974ad 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f2f28627-41ab-44d0-8ae7-3868f0e974ad ']' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.633 [2024-11-20 17:42:48.792268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.633 [2024-11-20 17:42:48.792299] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.633 [2024-11-20 17:42:48.792404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.633 [2024-11-20 17:42:48.792458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.633 [2024-11-20 17:42:48.792472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.633 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.892 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:21.893 17:42:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.893 [2024-11-20 17:42:48.924105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:21.893 [2024-11-20 17:42:48.926247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:21.893 [2024-11-20 17:42:48.926311] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:21.893 [2024-11-20 17:42:48.926369] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:21.893 [2024-11-20 17:42:48.926383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.893 [2024-11-20 17:42:48.926393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:21.893 request: 00:08:21.893 { 00:08:21.893 "name": "raid_bdev1", 00:08:21.893 "raid_level": "concat", 00:08:21.893 "base_bdevs": [ 00:08:21.893 "malloc1", 00:08:21.893 "malloc2" 00:08:21.893 ], 00:08:21.893 "strip_size_kb": 64, 00:08:21.893 "superblock": false, 00:08:21.893 "method": "bdev_raid_create", 00:08:21.893 "req_id": 1 00:08:21.893 } 00:08:21.893 Got JSON-RPC error response 00:08:21.893 response: 00:08:21.893 { 00:08:21.893 "code": -17, 00:08:21.893 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:21.893 } 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.893 [2024-11-20 17:42:48.987937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:21.893 [2024-11-20 17:42:48.988058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.893 [2024-11-20 17:42:48.988100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:21.893 [2024-11-20 17:42:48.988143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.893 [2024-11-20 17:42:48.990497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.893 [2024-11-20 17:42:48.990590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:21.893 [2024-11-20 17:42:48.990707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:21.893 [2024-11-20 17:42:48.990799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.893 pt1 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.893 17:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.893 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.893 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.893 "name": "raid_bdev1", 00:08:21.893 "uuid": "f2f28627-41ab-44d0-8ae7-3868f0e974ad", 00:08:21.893 "strip_size_kb": 64, 00:08:21.893 "state": "configuring", 00:08:21.893 "raid_level": "concat", 00:08:21.893 "superblock": true, 00:08:21.893 "num_base_bdevs": 2, 00:08:21.893 "num_base_bdevs_discovered": 1, 00:08:21.893 "num_base_bdevs_operational": 2, 00:08:21.893 "base_bdevs_list": [ 00:08:21.893 { 00:08:21.893 
"name": "pt1", 00:08:21.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.893 "is_configured": true, 00:08:21.893 "data_offset": 2048, 00:08:21.893 "data_size": 63488 00:08:21.893 }, 00:08:21.893 { 00:08:21.893 "name": null, 00:08:21.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.893 "is_configured": false, 00:08:21.893 "data_offset": 2048, 00:08:21.893 "data_size": 63488 00:08:21.893 } 00:08:21.893 ] 00:08:21.893 }' 00:08:21.893 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.893 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 [2024-11-20 17:42:49.443178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:22.462 [2024-11-20 17:42:49.443255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.462 [2024-11-20 17:42:49.443280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:22.462 [2024-11-20 17:42:49.443291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.462 [2024-11-20 17:42:49.443743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.462 [2024-11-20 17:42:49.443762] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:22.462 [2024-11-20 17:42:49.443844] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:22.462 [2024-11-20 17:42:49.443871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:22.462 [2024-11-20 17:42:49.443985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.462 [2024-11-20 17:42:49.443996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:22.462 [2024-11-20 17:42:49.444283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:22.462 [2024-11-20 17:42:49.444454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.462 [2024-11-20 17:42:49.444464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:22.462 [2024-11-20 17:42:49.444623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.462 pt2 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.462 
17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.462 "name": "raid_bdev1", 00:08:22.462 "uuid": "f2f28627-41ab-44d0-8ae7-3868f0e974ad", 00:08:22.462 "strip_size_kb": 64, 00:08:22.462 "state": "online", 00:08:22.462 "raid_level": "concat", 00:08:22.462 "superblock": true, 00:08:22.462 "num_base_bdevs": 2, 00:08:22.462 "num_base_bdevs_discovered": 2, 00:08:22.462 "num_base_bdevs_operational": 2, 00:08:22.462 "base_bdevs_list": [ 00:08:22.462 { 00:08:22.462 "name": "pt1", 00:08:22.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.462 "is_configured": true, 00:08:22.462 "data_offset": 2048, 00:08:22.462 "data_size": 63488 00:08:22.462 }, 00:08:22.462 { 00:08:22.462 "name": "pt2", 00:08:22.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.462 "is_configured": true, 00:08:22.462 "data_offset": 2048, 00:08:22.462 "data_size": 63488 
00:08:22.462 } 00:08:22.462 ] 00:08:22.462 }' 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.462 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.722 [2024-11-20 17:42:49.870695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.722 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.722 "name": "raid_bdev1", 00:08:22.722 "aliases": [ 00:08:22.722 "f2f28627-41ab-44d0-8ae7-3868f0e974ad" 00:08:22.722 ], 00:08:22.722 "product_name": "Raid Volume", 00:08:22.722 "block_size": 512, 00:08:22.722 "num_blocks": 126976, 00:08:22.722 "uuid": "f2f28627-41ab-44d0-8ae7-3868f0e974ad", 00:08:22.722 "assigned_rate_limits": { 00:08:22.722 
"rw_ios_per_sec": 0, 00:08:22.722 "rw_mbytes_per_sec": 0, 00:08:22.722 "r_mbytes_per_sec": 0, 00:08:22.722 "w_mbytes_per_sec": 0 00:08:22.722 }, 00:08:22.722 "claimed": false, 00:08:22.722 "zoned": false, 00:08:22.722 "supported_io_types": { 00:08:22.722 "read": true, 00:08:22.722 "write": true, 00:08:22.722 "unmap": true, 00:08:22.722 "flush": true, 00:08:22.722 "reset": true, 00:08:22.722 "nvme_admin": false, 00:08:22.722 "nvme_io": false, 00:08:22.722 "nvme_io_md": false, 00:08:22.722 "write_zeroes": true, 00:08:22.722 "zcopy": false, 00:08:22.722 "get_zone_info": false, 00:08:22.722 "zone_management": false, 00:08:22.722 "zone_append": false, 00:08:22.722 "compare": false, 00:08:22.722 "compare_and_write": false, 00:08:22.722 "abort": false, 00:08:22.722 "seek_hole": false, 00:08:22.722 "seek_data": false, 00:08:22.722 "copy": false, 00:08:22.722 "nvme_iov_md": false 00:08:22.722 }, 00:08:22.722 "memory_domains": [ 00:08:22.722 { 00:08:22.722 "dma_device_id": "system", 00:08:22.722 "dma_device_type": 1 00:08:22.722 }, 00:08:22.722 { 00:08:22.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.722 "dma_device_type": 2 00:08:22.722 }, 00:08:22.722 { 00:08:22.722 "dma_device_id": "system", 00:08:22.722 "dma_device_type": 1 00:08:22.722 }, 00:08:22.722 { 00:08:22.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.722 "dma_device_type": 2 00:08:22.722 } 00:08:22.722 ], 00:08:22.722 "driver_specific": { 00:08:22.722 "raid": { 00:08:22.722 "uuid": "f2f28627-41ab-44d0-8ae7-3868f0e974ad", 00:08:22.722 "strip_size_kb": 64, 00:08:22.722 "state": "online", 00:08:22.722 "raid_level": "concat", 00:08:22.722 "superblock": true, 00:08:22.722 "num_base_bdevs": 2, 00:08:22.722 "num_base_bdevs_discovered": 2, 00:08:22.722 "num_base_bdevs_operational": 2, 00:08:22.722 "base_bdevs_list": [ 00:08:22.722 { 00:08:22.722 "name": "pt1", 00:08:22.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.722 "is_configured": true, 00:08:22.722 "data_offset": 2048, 00:08:22.722 
"data_size": 63488 00:08:22.722 }, 00:08:22.722 { 00:08:22.722 "name": "pt2", 00:08:22.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.722 "is_configured": true, 00:08:22.722 "data_offset": 2048, 00:08:22.722 "data_size": 63488 00:08:22.722 } 00:08:22.722 ] 00:08:22.722 } 00:08:22.722 } 00:08:22.722 }' 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:22.982 pt2' 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.982 17:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
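The geometry check running above (`cmp_raid_bdev='512   '` compared against each base bdev's `cmp_base_bdev='512   '`) can be sketched without SPDK as plain bash: build a `block_size md_size md_interleave dif_type` signature string per bdev and compare them. This is an illustrative sketch, not the bdev_raid.sh helper itself; in the real test the four fields come from `rpc_cmd bdev_get_bdevs` piped through jq's `join(" ")`, which renders missing (null) fields as empty strings.

```shell
#!/usr/bin/env bash
# Sketch of the geometry comparison above (illustrative, not the real
# helper): jq's `[.block_size, .md_size, .md_interleave, .dif_type] |
# join(" ")` turns absent fields into empty strings, so a plain 512-byte
# bdev with no metadata yields "512" followed by three spaces.

sig() {
    local block_size=$1 md_size=$2 md_interleave=$3 dif_type=$4
    # join the four geometry fields with single spaces, keeping empty
    # fields as empty strings, exactly like jq's join(" ")
    printf '%s %s %s %s' "$block_size" "$md_size" "$md_interleave" "$dif_type"
}

cmp_raid_bdev=$(sig 512 '' '' '')   # raid volume: block_size only
cmp_base_bdev=$(sig 512 '' '' '')   # passthru base bdev: same geometry

if [[ $cmp_raid_bdev == "$cmp_base_bdev" ]]; then
    echo "geometry match"
else
    echo "geometry mismatch"
fi
```

The trailing spaces matter: the log's `[[ 512 == \5\1\2\ \ \ ]]` comparison only passes because both sides carry the same empty-field padding.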
00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:22.982 [2024-11-20 17:42:50.102345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f2f28627-41ab-44d0-8ae7-3868f0e974ad '!=' f2f28627-41ab-44d0-8ae7-3868f0e974ad ']' 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62589 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62589 ']' 
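The killprocess sequence above (`kill -0 62589`, `uname`, `ps --no-headers -o comm=`, then the kill itself) can be sketched with a throwaway process. This is a simplified illustration, not the autotest_common.sh implementation; the real helper also special-cases processes running as sudo.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the killprocess pattern above: confirm the pid
# is alive with a signal-0 probe, read its command name, then terminate
# and reap it.

sleep 30 &
pid=$!

kill -0 "$pid"                                   # alive? exits 0 if so
process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 in the log
echo "killing process with pid $pid ($process_name)"

kill "$pid"
wait "$pid" 2>/dev/null || true                  # reap; exit 143 is expected
```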
00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62589 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:22.982 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.241 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62589 00:08:23.241 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.241 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.241 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62589' 00:08:23.241 killing process with pid 62589 00:08:23.241 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62589 00:08:23.241 [2024-11-20 17:42:50.189905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:23.241 [2024-11-20 17:42:50.190066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.241 17:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62589 00:08:23.241 [2024-11-20 17:42:50.190147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.241 [2024-11-20 17:42:50.190164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:23.241 [2024-11-20 17:42:50.392603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.621 17:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:24.621 00:08:24.621 real 0m4.495s 00:08:24.621 user 0m6.279s 00:08:24.621 sys 0m0.762s 00:08:24.621 17:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.621 17:42:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 ************************************ 00:08:24.621 END TEST raid_superblock_test 00:08:24.621 ************************************ 00:08:24.621 17:42:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:24.621 17:42:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.621 17:42:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.621 17:42:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 ************************************ 00:08:24.621 START TEST raid_read_error_test 00:08:24.621 ************************************ 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.621 
17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KASXXEvlrG 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62799 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62799 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62799 ']' 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.621 17:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 [2024-11-20 17:42:51.706519] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:24.621 [2024-11-20 17:42:51.706652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62799 ] 00:08:24.907 [2024-11-20 17:42:51.883386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.907 [2024-11-20 17:42:51.997830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.166 [2024-11-20 17:42:52.203713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.166 [2024-11-20 17:42:52.203783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.426 BaseBdev1_malloc 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.426 true 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.426 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.685 [2024-11-20 17:42:52.602762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.685 [2024-11-20 17:42:52.602824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.685 [2024-11-20 17:42:52.602845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:25.686 [2024-11-20 17:42:52.602872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.686 [2024-11-20 17:42:52.605092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.686 [2024-11-20 17:42:52.605132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.686 BaseBdev1 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.686 17:42:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.686 BaseBdev2_malloc 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.686 true 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.686 [2024-11-20 17:42:52.670276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.686 [2024-11-20 17:42:52.670347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.686 [2024-11-20 17:42:52.670363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.686 [2024-11-20 17:42:52.670374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.686 [2024-11-20 17:42:52.672440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.686 [2024-11-20 17:42:52.672477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:08:25.686 BaseBdev2 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.686 [2024-11-20 17:42:52.682308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.686 [2024-11-20 17:42:52.684058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.686 [2024-11-20 17:42:52.684251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:25.686 [2024-11-20 17:42:52.684266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:25.686 [2024-11-20 17:42:52.684540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:25.686 [2024-11-20 17:42:52.684711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:25.686 [2024-11-20 17:42:52.684723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:25.686 [2024-11-20 17:42:52.684872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.686 "name": "raid_bdev1", 00:08:25.686 "uuid": "d1da4791-318b-4bc5-9e08-81f8187227c3", 00:08:25.686 "strip_size_kb": 64, 00:08:25.686 "state": "online", 00:08:25.686 "raid_level": "concat", 00:08:25.686 "superblock": true, 00:08:25.686 "num_base_bdevs": 2, 00:08:25.686 "num_base_bdevs_discovered": 2, 00:08:25.686 "num_base_bdevs_operational": 2, 00:08:25.686 "base_bdevs_list": [ 00:08:25.686 { 00:08:25.686 "name": "BaseBdev1", 00:08:25.686 "uuid": "a9a87c3e-91ba-59a9-9c71-cb9aa5346e10", 00:08:25.686 "is_configured": true, 00:08:25.686 "data_offset": 2048, 00:08:25.686 "data_size": 63488 
00:08:25.686 }, 00:08:25.686 { 00:08:25.686 "name": "BaseBdev2", 00:08:25.686 "uuid": "eed95bd5-7179-5de9-a5fb-91b96011539b", 00:08:25.686 "is_configured": true, 00:08:25.686 "data_offset": 2048, 00:08:25.686 "data_size": 63488 00:08:25.686 } 00:08:25.686 ] 00:08:25.686 }' 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.686 17:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.255 17:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:26.255 17:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:26.255 [2024-11-20 17:42:53.230634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.192 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.192 "name": "raid_bdev1", 00:08:27.192 "uuid": "d1da4791-318b-4bc5-9e08-81f8187227c3", 00:08:27.192 "strip_size_kb": 64, 00:08:27.192 "state": "online", 00:08:27.192 "raid_level": "concat", 00:08:27.192 "superblock": true, 00:08:27.192 "num_base_bdevs": 2, 00:08:27.192 "num_base_bdevs_discovered": 2, 00:08:27.192 "num_base_bdevs_operational": 2, 00:08:27.192 "base_bdevs_list": [ 00:08:27.192 { 00:08:27.192 "name": "BaseBdev1", 00:08:27.192 "uuid": "a9a87c3e-91ba-59a9-9c71-cb9aa5346e10", 00:08:27.192 "is_configured": true, 00:08:27.192 "data_offset": 2048, 00:08:27.193 "data_size": 63488 
00:08:27.193 }, 00:08:27.193 { 00:08:27.193 "name": "BaseBdev2", 00:08:27.193 "uuid": "eed95bd5-7179-5de9-a5fb-91b96011539b", 00:08:27.193 "is_configured": true, 00:08:27.193 "data_offset": 2048, 00:08:27.193 "data_size": 63488 00:08:27.193 } 00:08:27.193 ] 00:08:27.193 }' 00:08:27.193 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.193 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.451 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.451 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.451 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.451 [2024-11-20 17:42:54.614852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.451 [2024-11-20 17:42:54.614965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.451 [2024-11-20 17:42:54.618245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.451 [2024-11-20 17:42:54.618326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.451 [2024-11-20 17:42:54.618418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.451 [2024-11-20 17:42:54.618491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:27.451 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.451 { 00:08:27.451 "results": [ 00:08:27.451 { 00:08:27.451 "job": "raid_bdev1", 00:08:27.451 "core_mask": "0x1", 00:08:27.451 "workload": "randrw", 00:08:27.451 "percentage": 50, 00:08:27.451 "status": "finished", 00:08:27.451 "queue_depth": 1, 00:08:27.451 "io_size": 131072, 00:08:27.451 "runtime": 
1.385229, 00:08:27.451 "iops": 15658.78277165725, 00:08:27.451 "mibps": 1957.3478464571563, 00:08:27.451 "io_failed": 1, 00:08:27.451 "io_timeout": 0, 00:08:27.452 "avg_latency_us": 88.44487726946606, 00:08:27.452 "min_latency_us": 25.7117903930131, 00:08:27.452 "max_latency_us": 1745.7187772925763 00:08:27.452 } 00:08:27.452 ], 00:08:27.452 "core_count": 1 00:08:27.452 } 00:08:27.452 17:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62799 00:08:27.452 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62799 ']' 00:08:27.452 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62799 00:08:27.452 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:27.710 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.710 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62799 00:08:27.710 killing process with pid 62799 00:08:27.710 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.710 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.710 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62799' 00:08:27.710 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62799 00:08:27.710 [2024-11-20 17:42:54.665277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.710 17:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62799 00:08:27.710 [2024-11-20 17:42:54.807206] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KASXXEvlrG 00:08:29.087 17:42:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:29.087 ************************************ 00:08:29.087 END TEST raid_read_error_test 00:08:29.087 ************************************ 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:29.087 00:08:29.087 real 0m4.447s 00:08:29.087 user 0m5.337s 00:08:29.087 sys 0m0.550s 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.087 17:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.087 17:42:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:29.087 17:42:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:29.087 17:42:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.087 17:42:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.087 ************************************ 00:08:29.087 START TEST raid_write_error_test 00:08:29.087 ************************************ 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:29.087 17:42:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:29.087 17:42:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.moPboRXtwF 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62946 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62946 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62946 ']' 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.087 17:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.087 [2024-11-20 17:42:56.221373] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:08:29.087 [2024-11-20 17:42:56.221568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62946 ] 00:08:29.348 [2024-11-20 17:42:56.395080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.348 [2024-11-20 17:42:56.511499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.608 [2024-11-20 17:42:56.713379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.608 [2024-11-20 17:42:56.713536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 BaseBdev1_malloc 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 true 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 [2024-11-20 17:42:57.131824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:30.177 [2024-11-20 17:42:57.131908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.177 [2024-11-20 17:42:57.131933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:30.177 [2024-11-20 17:42:57.131944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.177 [2024-11-20 17:42:57.134307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.177 [2024-11-20 17:42:57.134354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:30.177 BaseBdev1 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 BaseBdev2_malloc 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:30.177 17:42:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 true 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 [2024-11-20 17:42:57.197926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:30.177 [2024-11-20 17:42:57.198043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.177 [2024-11-20 17:42:57.198083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:30.177 [2024-11-20 17:42:57.198096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.177 [2024-11-20 17:42:57.200302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.177 [2024-11-20 17:42:57.200344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:30.177 BaseBdev2 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 [2024-11-20 17:42:57.209975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:30.177 [2024-11-20 17:42:57.211799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.177 [2024-11-20 17:42:57.211989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:30.177 [2024-11-20 17:42:57.212004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:30.177 [2024-11-20 17:42:57.212291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:30.177 [2024-11-20 17:42:57.212489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:30.177 [2024-11-20 17:42:57.212508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:30.177 [2024-11-20 17:42:57.212662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.177 17:42:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.177 "name": "raid_bdev1", 00:08:30.177 "uuid": "c13c4b22-a6d6-49d6-ba53-086a1605af47", 00:08:30.177 "strip_size_kb": 64, 00:08:30.177 "state": "online", 00:08:30.177 "raid_level": "concat", 00:08:30.177 "superblock": true, 00:08:30.177 "num_base_bdevs": 2, 00:08:30.177 "num_base_bdevs_discovered": 2, 00:08:30.177 "num_base_bdevs_operational": 2, 00:08:30.177 "base_bdevs_list": [ 00:08:30.177 { 00:08:30.177 "name": "BaseBdev1", 00:08:30.177 "uuid": "58f67aba-d2ab-5ec7-9c05-5dcddb429f43", 00:08:30.177 "is_configured": true, 00:08:30.177 "data_offset": 2048, 00:08:30.177 "data_size": 63488 00:08:30.177 }, 00:08:30.177 { 00:08:30.177 "name": "BaseBdev2", 00:08:30.177 "uuid": "b5599bd4-4fea-5285-8d79-cb6ad6564b63", 00:08:30.177 "is_configured": true, 00:08:30.177 "data_offset": 2048, 00:08:30.177 "data_size": 63488 00:08:30.177 } 00:08:30.177 ] 00:08:30.177 }' 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.177 17:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.747 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:30.747 17:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.747 [2024-11-20 17:42:57.734397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.684 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.685 "name": "raid_bdev1", 00:08:31.685 "uuid": "c13c4b22-a6d6-49d6-ba53-086a1605af47", 00:08:31.685 "strip_size_kb": 64, 00:08:31.685 "state": "online", 00:08:31.685 "raid_level": "concat", 00:08:31.685 "superblock": true, 00:08:31.685 "num_base_bdevs": 2, 00:08:31.685 "num_base_bdevs_discovered": 2, 00:08:31.685 "num_base_bdevs_operational": 2, 00:08:31.685 "base_bdevs_list": [ 00:08:31.685 { 00:08:31.685 "name": "BaseBdev1", 00:08:31.685 "uuid": "58f67aba-d2ab-5ec7-9c05-5dcddb429f43", 00:08:31.685 "is_configured": true, 00:08:31.685 "data_offset": 2048, 00:08:31.685 "data_size": 63488 00:08:31.685 }, 00:08:31.685 { 00:08:31.685 "name": "BaseBdev2", 00:08:31.685 "uuid": "b5599bd4-4fea-5285-8d79-cb6ad6564b63", 00:08:31.685 "is_configured": true, 00:08:31.685 "data_offset": 2048, 00:08:31.685 "data_size": 63488 00:08:31.685 } 00:08:31.685 ] 00:08:31.685 }' 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.685 17:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.253 [2024-11-20 17:42:59.148939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.253 [2024-11-20 17:42:59.149063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.253 [2024-11-20 17:42:59.151802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.253 [2024-11-20 17:42:59.151912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.253 [2024-11-20 17:42:59.151963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.253 [2024-11-20 17:42:59.152020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62946 00:08:32.253 { 00:08:32.253 "results": [ 00:08:32.253 { 00:08:32.253 "job": "raid_bdev1", 00:08:32.253 "core_mask": "0x1", 00:08:32.253 "workload": "randrw", 00:08:32.253 "percentage": 50, 00:08:32.253 "status": "finished", 00:08:32.253 "queue_depth": 1, 00:08:32.253 "io_size": 131072, 00:08:32.253 "runtime": 1.415393, 00:08:32.253 "iops": 15758.167519551107, 00:08:32.253 "mibps": 1969.7709399438884, 00:08:32.253 "io_failed": 1, 00:08:32.253 "io_timeout": 0, 00:08:32.253 "avg_latency_us": 87.90106168061091, 00:08:32.253 "min_latency_us": 26.494323144104804, 00:08:32.253 "max_latency_us": 1466.6899563318777 00:08:32.253 } 00:08:32.253 ], 00:08:32.253 "core_count": 1 00:08:32.253 } 00:08:32.253 17:42:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62946 ']' 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62946 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62946 00:08:32.253 killing process with pid 62946 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62946' 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62946 00:08:32.253 [2024-11-20 17:42:59.198484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.253 17:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62946 00:08:32.253 [2024-11-20 17:42:59.334379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.moPboRXtwF 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:33.635 ************************************ 00:08:33.635 END TEST raid_write_error_test 00:08:33.635 
************************************ 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:33.635 00:08:33.635 real 0m4.571s 00:08:33.635 user 0m5.466s 00:08:33.635 sys 0m0.552s 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.635 17:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.635 17:43:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:33.635 17:43:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:33.635 17:43:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.635 17:43:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.635 17:43:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.635 ************************************ 00:08:33.635 START TEST raid_state_function_test 00:08:33.635 ************************************ 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63084 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63084' 00:08:33.635 Process raid pid: 63084 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63084 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63084 ']' 00:08:33.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.635 17:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.896 [2024-11-20 17:43:00.850406] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:08:33.896 [2024-11-20 17:43:00.850542] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.896 [2024-11-20 17:43:01.003919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.155 [2024-11-20 17:43:01.148413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.415 [2024-11-20 17:43:01.397824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.415 [2024-11-20 17:43:01.397882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.674 [2024-11-20 17:43:01.762133] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.674 [2024-11-20 17:43:01.762222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.674 [2024-11-20 17:43:01.762236] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.674 [2024-11-20 17:43:01.762248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.674 17:43:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.674 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.675 "name": "Existed_Raid", 00:08:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.675 "strip_size_kb": 0, 00:08:34.675 "state": "configuring", 00:08:34.675 
"raid_level": "raid1", 00:08:34.675 "superblock": false, 00:08:34.675 "num_base_bdevs": 2, 00:08:34.675 "num_base_bdevs_discovered": 0, 00:08:34.675 "num_base_bdevs_operational": 2, 00:08:34.675 "base_bdevs_list": [ 00:08:34.675 { 00:08:34.675 "name": "BaseBdev1", 00:08:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.675 "is_configured": false, 00:08:34.675 "data_offset": 0, 00:08:34.675 "data_size": 0 00:08:34.675 }, 00:08:34.675 { 00:08:34.675 "name": "BaseBdev2", 00:08:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.675 "is_configured": false, 00:08:34.675 "data_offset": 0, 00:08:34.675 "data_size": 0 00:08:34.675 } 00:08:34.675 ] 00:08:34.675 }' 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.675 17:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.248 [2024-11-20 17:43:02.233316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.248 [2024-11-20 17:43:02.233454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:35.248 [2024-11-20 17:43:02.245239] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.248 [2024-11-20 17:43:02.245333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.248 [2024-11-20 17:43:02.245361] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.248 [2024-11-20 17:43:02.245390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.248 [2024-11-20 17:43:02.302150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.248 BaseBdev1 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.248 [ 00:08:35.248 { 00:08:35.248 "name": "BaseBdev1", 00:08:35.248 "aliases": [ 00:08:35.248 "0514d7f3-5d61-48a6-aae2-376f1547896d" 00:08:35.248 ], 00:08:35.248 "product_name": "Malloc disk", 00:08:35.248 "block_size": 512, 00:08:35.248 "num_blocks": 65536, 00:08:35.248 "uuid": "0514d7f3-5d61-48a6-aae2-376f1547896d", 00:08:35.248 "assigned_rate_limits": { 00:08:35.248 "rw_ios_per_sec": 0, 00:08:35.248 "rw_mbytes_per_sec": 0, 00:08:35.248 "r_mbytes_per_sec": 0, 00:08:35.248 "w_mbytes_per_sec": 0 00:08:35.248 }, 00:08:35.248 "claimed": true, 00:08:35.248 "claim_type": "exclusive_write", 00:08:35.248 "zoned": false, 00:08:35.248 "supported_io_types": { 00:08:35.248 "read": true, 00:08:35.248 "write": true, 00:08:35.248 "unmap": true, 00:08:35.248 "flush": true, 00:08:35.248 "reset": true, 00:08:35.248 "nvme_admin": false, 00:08:35.248 "nvme_io": false, 00:08:35.248 "nvme_io_md": false, 00:08:35.248 "write_zeroes": true, 00:08:35.248 "zcopy": true, 00:08:35.248 "get_zone_info": false, 00:08:35.248 "zone_management": false, 00:08:35.248 "zone_append": false, 00:08:35.248 "compare": false, 00:08:35.248 "compare_and_write": false, 00:08:35.248 "abort": true, 00:08:35.248 "seek_hole": false, 00:08:35.248 "seek_data": false, 00:08:35.248 "copy": true, 00:08:35.248 "nvme_iov_md": 
false 00:08:35.248 }, 00:08:35.248 "memory_domains": [ 00:08:35.248 { 00:08:35.248 "dma_device_id": "system", 00:08:35.248 "dma_device_type": 1 00:08:35.248 }, 00:08:35.248 { 00:08:35.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.248 "dma_device_type": 2 00:08:35.248 } 00:08:35.248 ], 00:08:35.248 "driver_specific": {} 00:08:35.248 } 00:08:35.248 ] 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.248 
17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.248 "name": "Existed_Raid", 00:08:35.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.248 "strip_size_kb": 0, 00:08:35.248 "state": "configuring", 00:08:35.248 "raid_level": "raid1", 00:08:35.248 "superblock": false, 00:08:35.248 "num_base_bdevs": 2, 00:08:35.248 "num_base_bdevs_discovered": 1, 00:08:35.248 "num_base_bdevs_operational": 2, 00:08:35.248 "base_bdevs_list": [ 00:08:35.248 { 00:08:35.248 "name": "BaseBdev1", 00:08:35.248 "uuid": "0514d7f3-5d61-48a6-aae2-376f1547896d", 00:08:35.248 "is_configured": true, 00:08:35.248 "data_offset": 0, 00:08:35.248 "data_size": 65536 00:08:35.248 }, 00:08:35.248 { 00:08:35.248 "name": "BaseBdev2", 00:08:35.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.248 "is_configured": false, 00:08:35.248 "data_offset": 0, 00:08:35.248 "data_size": 0 00:08:35.248 } 00:08:35.248 ] 00:08:35.248 }' 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.248 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.818 [2024-11-20 17:43:02.737605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.818 [2024-11-20 17:43:02.737765] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.818 [2024-11-20 17:43:02.749610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.818 [2024-11-20 17:43:02.751606] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.818 [2024-11-20 17:43:02.751669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:35.818 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.819 "name": "Existed_Raid", 00:08:35.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.819 "strip_size_kb": 0, 00:08:35.819 "state": "configuring", 00:08:35.819 "raid_level": "raid1", 00:08:35.819 "superblock": false, 00:08:35.819 "num_base_bdevs": 2, 00:08:35.819 "num_base_bdevs_discovered": 1, 00:08:35.819 "num_base_bdevs_operational": 2, 00:08:35.819 "base_bdevs_list": [ 00:08:35.819 { 00:08:35.819 "name": "BaseBdev1", 00:08:35.819 "uuid": "0514d7f3-5d61-48a6-aae2-376f1547896d", 00:08:35.819 "is_configured": true, 00:08:35.819 "data_offset": 0, 00:08:35.819 "data_size": 65536 00:08:35.819 }, 00:08:35.819 { 00:08:35.819 "name": "BaseBdev2", 00:08:35.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.819 "is_configured": false, 00:08:35.819 "data_offset": 0, 00:08:35.819 "data_size": 0 00:08:35.819 } 00:08:35.819 ] 
00:08:35.819 }' 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.819 17:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.079 [2024-11-20 17:43:03.231863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.079 [2024-11-20 17:43:03.231996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:36.079 [2024-11-20 17:43:03.232023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:36.079 [2024-11-20 17:43:03.232333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:36.079 [2024-11-20 17:43:03.232561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:36.079 [2024-11-20 17:43:03.232611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:36.079 [2024-11-20 17:43:03.232925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.079 BaseBdev2 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.079 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.080 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.080 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.339 [ 00:08:36.339 { 00:08:36.339 "name": "BaseBdev2", 00:08:36.339 "aliases": [ 00:08:36.339 "d63fa992-ae0d-4559-8f63-d56b2641bfcc" 00:08:36.339 ], 00:08:36.339 "product_name": "Malloc disk", 00:08:36.339 "block_size": 512, 00:08:36.339 "num_blocks": 65536, 00:08:36.339 "uuid": "d63fa992-ae0d-4559-8f63-d56b2641bfcc", 00:08:36.339 "assigned_rate_limits": { 00:08:36.339 "rw_ios_per_sec": 0, 00:08:36.339 "rw_mbytes_per_sec": 0, 00:08:36.339 "r_mbytes_per_sec": 0, 00:08:36.339 "w_mbytes_per_sec": 0 00:08:36.339 }, 00:08:36.339 "claimed": true, 00:08:36.339 "claim_type": "exclusive_write", 00:08:36.339 "zoned": false, 00:08:36.339 "supported_io_types": { 00:08:36.339 "read": true, 00:08:36.339 "write": true, 00:08:36.339 "unmap": true, 00:08:36.339 "flush": true, 00:08:36.339 "reset": true, 00:08:36.339 "nvme_admin": false, 00:08:36.339 "nvme_io": false, 00:08:36.339 "nvme_io_md": false, 00:08:36.339 "write_zeroes": 
true, 00:08:36.339 "zcopy": true, 00:08:36.339 "get_zone_info": false, 00:08:36.339 "zone_management": false, 00:08:36.339 "zone_append": false, 00:08:36.339 "compare": false, 00:08:36.339 "compare_and_write": false, 00:08:36.339 "abort": true, 00:08:36.339 "seek_hole": false, 00:08:36.339 "seek_data": false, 00:08:36.339 "copy": true, 00:08:36.339 "nvme_iov_md": false 00:08:36.339 }, 00:08:36.339 "memory_domains": [ 00:08:36.339 { 00:08:36.339 "dma_device_id": "system", 00:08:36.339 "dma_device_type": 1 00:08:36.339 }, 00:08:36.339 { 00:08:36.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.339 "dma_device_type": 2 00:08:36.339 } 00:08:36.339 ], 00:08:36.339 "driver_specific": {} 00:08:36.339 } 00:08:36.339 ] 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.339 17:43:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.339 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.339 "name": "Existed_Raid", 00:08:36.339 "uuid": "ffaf9d6c-022c-48b6-b3d6-0c1c08fe1fa1", 00:08:36.339 "strip_size_kb": 0, 00:08:36.339 "state": "online", 00:08:36.339 "raid_level": "raid1", 00:08:36.339 "superblock": false, 00:08:36.339 "num_base_bdevs": 2, 00:08:36.339 "num_base_bdevs_discovered": 2, 00:08:36.339 "num_base_bdevs_operational": 2, 00:08:36.339 "base_bdevs_list": [ 00:08:36.339 { 00:08:36.339 "name": "BaseBdev1", 00:08:36.339 "uuid": "0514d7f3-5d61-48a6-aae2-376f1547896d", 00:08:36.339 "is_configured": true, 00:08:36.339 "data_offset": 0, 00:08:36.339 "data_size": 65536 00:08:36.339 }, 00:08:36.339 { 00:08:36.340 "name": "BaseBdev2", 00:08:36.340 "uuid": "d63fa992-ae0d-4559-8f63-d56b2641bfcc", 00:08:36.340 "is_configured": true, 00:08:36.340 "data_offset": 0, 00:08:36.340 "data_size": 65536 00:08:36.340 } 00:08:36.340 ] 00:08:36.340 }' 00:08:36.340 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.340 17:43:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.599 [2024-11-20 17:43:03.715482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.599 "name": "Existed_Raid", 00:08:36.599 "aliases": [ 00:08:36.599 "ffaf9d6c-022c-48b6-b3d6-0c1c08fe1fa1" 00:08:36.599 ], 00:08:36.599 "product_name": "Raid Volume", 00:08:36.599 "block_size": 512, 00:08:36.599 "num_blocks": 65536, 00:08:36.599 "uuid": "ffaf9d6c-022c-48b6-b3d6-0c1c08fe1fa1", 00:08:36.599 "assigned_rate_limits": { 00:08:36.599 "rw_ios_per_sec": 0, 00:08:36.599 "rw_mbytes_per_sec": 0, 00:08:36.599 "r_mbytes_per_sec": 0, 00:08:36.599 
"w_mbytes_per_sec": 0 00:08:36.599 }, 00:08:36.599 "claimed": false, 00:08:36.599 "zoned": false, 00:08:36.599 "supported_io_types": { 00:08:36.599 "read": true, 00:08:36.599 "write": true, 00:08:36.599 "unmap": false, 00:08:36.599 "flush": false, 00:08:36.599 "reset": true, 00:08:36.599 "nvme_admin": false, 00:08:36.599 "nvme_io": false, 00:08:36.599 "nvme_io_md": false, 00:08:36.599 "write_zeroes": true, 00:08:36.599 "zcopy": false, 00:08:36.599 "get_zone_info": false, 00:08:36.599 "zone_management": false, 00:08:36.599 "zone_append": false, 00:08:36.599 "compare": false, 00:08:36.599 "compare_and_write": false, 00:08:36.599 "abort": false, 00:08:36.599 "seek_hole": false, 00:08:36.599 "seek_data": false, 00:08:36.599 "copy": false, 00:08:36.599 "nvme_iov_md": false 00:08:36.599 }, 00:08:36.599 "memory_domains": [ 00:08:36.599 { 00:08:36.599 "dma_device_id": "system", 00:08:36.599 "dma_device_type": 1 00:08:36.599 }, 00:08:36.599 { 00:08:36.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.599 "dma_device_type": 2 00:08:36.599 }, 00:08:36.599 { 00:08:36.599 "dma_device_id": "system", 00:08:36.599 "dma_device_type": 1 00:08:36.599 }, 00:08:36.599 { 00:08:36.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.599 "dma_device_type": 2 00:08:36.599 } 00:08:36.599 ], 00:08:36.599 "driver_specific": { 00:08:36.599 "raid": { 00:08:36.599 "uuid": "ffaf9d6c-022c-48b6-b3d6-0c1c08fe1fa1", 00:08:36.599 "strip_size_kb": 0, 00:08:36.599 "state": "online", 00:08:36.599 "raid_level": "raid1", 00:08:36.599 "superblock": false, 00:08:36.599 "num_base_bdevs": 2, 00:08:36.599 "num_base_bdevs_discovered": 2, 00:08:36.599 "num_base_bdevs_operational": 2, 00:08:36.599 "base_bdevs_list": [ 00:08:36.599 { 00:08:36.599 "name": "BaseBdev1", 00:08:36.599 "uuid": "0514d7f3-5d61-48a6-aae2-376f1547896d", 00:08:36.599 "is_configured": true, 00:08:36.599 "data_offset": 0, 00:08:36.599 "data_size": 65536 00:08:36.599 }, 00:08:36.599 { 00:08:36.599 "name": "BaseBdev2", 00:08:36.599 "uuid": 
"d63fa992-ae0d-4559-8f63-d56b2641bfcc", 00:08:36.599 "is_configured": true, 00:08:36.599 "data_offset": 0, 00:08:36.599 "data_size": 65536 00:08:36.599 } 00:08:36.599 ] 00:08:36.599 } 00:08:36.599 } 00:08:36.599 }' 00:08:36.599 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:36.863 BaseBdev2' 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.863 17:43:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.863 17:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.863 [2024-11-20 17:43:03.946835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.193 "name": "Existed_Raid", 00:08:37.193 "uuid": "ffaf9d6c-022c-48b6-b3d6-0c1c08fe1fa1", 00:08:37.193 "strip_size_kb": 0, 00:08:37.193 "state": "online", 00:08:37.193 "raid_level": "raid1", 00:08:37.193 "superblock": false, 00:08:37.193 "num_base_bdevs": 2, 00:08:37.193 "num_base_bdevs_discovered": 1, 00:08:37.193 "num_base_bdevs_operational": 1, 00:08:37.193 "base_bdevs_list": [ 00:08:37.193 { 
00:08:37.193 "name": null, 00:08:37.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.193 "is_configured": false, 00:08:37.193 "data_offset": 0, 00:08:37.193 "data_size": 65536 00:08:37.193 }, 00:08:37.193 { 00:08:37.193 "name": "BaseBdev2", 00:08:37.193 "uuid": "d63fa992-ae0d-4559-8f63-d56b2641bfcc", 00:08:37.193 "is_configured": true, 00:08:37.193 "data_offset": 0, 00:08:37.193 "data_size": 65536 00:08:37.193 } 00:08:37.193 ] 00:08:37.193 }' 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.193 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.453 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:37.453 [2024-11-20 17:43:04.554180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.453 [2024-11-20 17:43:04.554279] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.714 [2024-11-20 17:43:04.656545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.714 [2024-11-20 17:43:04.656606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.714 [2024-11-20 17:43:04.656619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63084 00:08:37.714 17:43:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63084 ']' 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63084 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:37.714 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.715 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63084 00:08:37.715 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.715 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.715 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63084' 00:08:37.715 killing process with pid 63084 00:08:37.715 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63084 00:08:37.715 [2024-11-20 17:43:04.749789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.715 17:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63084 00:08:37.715 [2024-11-20 17:43:04.767386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:39.096 00:08:39.096 real 0m5.166s 00:08:39.096 user 0m7.362s 00:08:39.096 sys 0m0.923s 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.096 ************************************ 00:08:39.096 END TEST raid_state_function_test 00:08:39.096 ************************************ 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.096 17:43:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:39.096 17:43:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.096 17:43:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.096 17:43:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.096 ************************************ 00:08:39.096 START TEST raid_state_function_test_sb 00:08:39.096 ************************************ 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63337 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63337' 00:08:39.096 Process raid pid: 63337 00:08:39.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63337 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63337 ']' 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.096 17:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.096 [2024-11-20 17:43:06.081516] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:08:39.096 [2024-11-20 17:43:06.081731] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.096 [2024-11-20 17:43:06.257464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.355 [2024-11-20 17:43:06.378594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.616 [2024-11-20 17:43:06.590644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.616 [2024-11-20 17:43:06.590785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.875 [2024-11-20 17:43:06.932988] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.875 [2024-11-20 17:43:06.933125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.875 [2024-11-20 17:43:06.933142] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.875 [2024-11-20 17:43:06.933153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.875 
17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.875 "name": "Existed_Raid", 00:08:39.875 "uuid": "1b561aff-a486-4569-818e-5aab2144708c", 00:08:39.875 "strip_size_kb": 0, 
00:08:39.875 "state": "configuring", 00:08:39.875 "raid_level": "raid1", 00:08:39.875 "superblock": true, 00:08:39.875 "num_base_bdevs": 2, 00:08:39.875 "num_base_bdevs_discovered": 0, 00:08:39.875 "num_base_bdevs_operational": 2, 00:08:39.875 "base_bdevs_list": [ 00:08:39.875 { 00:08:39.875 "name": "BaseBdev1", 00:08:39.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.875 "is_configured": false, 00:08:39.875 "data_offset": 0, 00:08:39.875 "data_size": 0 00:08:39.875 }, 00:08:39.875 { 00:08:39.875 "name": "BaseBdev2", 00:08:39.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.875 "is_configured": false, 00:08:39.875 "data_offset": 0, 00:08:39.875 "data_size": 0 00:08:39.875 } 00:08:39.875 ] 00:08:39.875 }' 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.875 17:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.443 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.443 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.443 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.443 [2024-11-20 17:43:07.364188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.443 [2024-11-20 17:43:07.364269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:40.443 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.443 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.443 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.443 17:43:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.443 [2024-11-20 17:43:07.376170] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.443 [2024-11-20 17:43:07.376249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.444 [2024-11-20 17:43:07.376279] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.444 [2024-11-20 17:43:07.376306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 [2024-11-20 17:43:07.424468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.444 BaseBdev1 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 [ 00:08:40.444 { 00:08:40.444 "name": "BaseBdev1", 00:08:40.444 "aliases": [ 00:08:40.444 "1797da80-be3d-45a0-ace2-94aedb9ef9ee" 00:08:40.444 ], 00:08:40.444 "product_name": "Malloc disk", 00:08:40.444 "block_size": 512, 00:08:40.444 "num_blocks": 65536, 00:08:40.444 "uuid": "1797da80-be3d-45a0-ace2-94aedb9ef9ee", 00:08:40.444 "assigned_rate_limits": { 00:08:40.444 "rw_ios_per_sec": 0, 00:08:40.444 "rw_mbytes_per_sec": 0, 00:08:40.444 "r_mbytes_per_sec": 0, 00:08:40.444 "w_mbytes_per_sec": 0 00:08:40.444 }, 00:08:40.444 "claimed": true, 00:08:40.444 "claim_type": "exclusive_write", 00:08:40.444 "zoned": false, 00:08:40.444 "supported_io_types": { 00:08:40.444 "read": true, 00:08:40.444 "write": true, 00:08:40.444 "unmap": true, 00:08:40.444 "flush": true, 00:08:40.444 "reset": true, 00:08:40.444 "nvme_admin": false, 00:08:40.444 "nvme_io": false, 00:08:40.444 "nvme_io_md": false, 00:08:40.444 "write_zeroes": true, 00:08:40.444 "zcopy": true, 00:08:40.444 "get_zone_info": false, 00:08:40.444 "zone_management": false, 00:08:40.444 "zone_append": false, 00:08:40.444 "compare": false, 00:08:40.444 "compare_and_write": false, 00:08:40.444 
"abort": true, 00:08:40.444 "seek_hole": false, 00:08:40.444 "seek_data": false, 00:08:40.444 "copy": true, 00:08:40.444 "nvme_iov_md": false 00:08:40.444 }, 00:08:40.444 "memory_domains": [ 00:08:40.444 { 00:08:40.444 "dma_device_id": "system", 00:08:40.444 "dma_device_type": 1 00:08:40.444 }, 00:08:40.444 { 00:08:40.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.444 "dma_device_type": 2 00:08:40.444 } 00:08:40.444 ], 00:08:40.444 "driver_specific": {} 00:08:40.444 } 00:08:40.444 ] 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.444 "name": "Existed_Raid", 00:08:40.444 "uuid": "cdbe0f8f-3880-43fe-a61b-abf829ede015", 00:08:40.444 "strip_size_kb": 0, 00:08:40.444 "state": "configuring", 00:08:40.444 "raid_level": "raid1", 00:08:40.444 "superblock": true, 00:08:40.444 "num_base_bdevs": 2, 00:08:40.444 "num_base_bdevs_discovered": 1, 00:08:40.444 "num_base_bdevs_operational": 2, 00:08:40.444 "base_bdevs_list": [ 00:08:40.444 { 00:08:40.444 "name": "BaseBdev1", 00:08:40.444 "uuid": "1797da80-be3d-45a0-ace2-94aedb9ef9ee", 00:08:40.444 "is_configured": true, 00:08:40.444 "data_offset": 2048, 00:08:40.444 "data_size": 63488 00:08:40.444 }, 00:08:40.444 { 00:08:40.444 "name": "BaseBdev2", 00:08:40.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.444 "is_configured": false, 00:08:40.444 "data_offset": 0, 00:08:40.444 "data_size": 0 00:08:40.444 } 00:08:40.444 ] 00:08:40.444 }' 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.444 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.703 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.703 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.703 17:43:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.967 [2024-11-20 17:43:07.879745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.967 [2024-11-20 17:43:07.879807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.967 [2024-11-20 17:43:07.891745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.967 [2024-11-20 17:43:07.893708] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.967 [2024-11-20 17:43:07.893753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.967 "name": "Existed_Raid", 00:08:40.967 "uuid": "d607b6c9-f447-4fc5-ac8c-49e0bafec98f", 00:08:40.967 "strip_size_kb": 0, 00:08:40.967 "state": "configuring", 00:08:40.967 "raid_level": "raid1", 00:08:40.967 "superblock": true, 00:08:40.967 "num_base_bdevs": 2, 00:08:40.967 "num_base_bdevs_discovered": 1, 00:08:40.967 "num_base_bdevs_operational": 2, 00:08:40.967 "base_bdevs_list": [ 00:08:40.967 { 00:08:40.967 "name": "BaseBdev1", 00:08:40.967 "uuid": "1797da80-be3d-45a0-ace2-94aedb9ef9ee", 00:08:40.967 "is_configured": true, 00:08:40.967 "data_offset": 2048, 
00:08:40.967 "data_size": 63488 00:08:40.967 }, 00:08:40.967 { 00:08:40.967 "name": "BaseBdev2", 00:08:40.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.967 "is_configured": false, 00:08:40.967 "data_offset": 0, 00:08:40.967 "data_size": 0 00:08:40.967 } 00:08:40.967 ] 00:08:40.967 }' 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.967 17:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.227 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.227 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.227 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.487 [2024-11-20 17:43:08.421105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.487 [2024-11-20 17:43:08.421361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:41.487 [2024-11-20 17:43:08.421383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.487 [2024-11-20 17:43:08.421664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:41.487 [2024-11-20 17:43:08.421830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:41.487 [2024-11-20 17:43:08.421845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:41.487 BaseBdev2 00:08:41.487 [2024-11-20 17:43:08.421981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.487 [ 00:08:41.487 { 00:08:41.487 "name": "BaseBdev2", 00:08:41.487 "aliases": [ 00:08:41.487 "b3cdf4ce-0502-48db-b35d-ad8db1a08b46" 00:08:41.487 ], 00:08:41.487 "product_name": "Malloc disk", 00:08:41.487 "block_size": 512, 00:08:41.487 "num_blocks": 65536, 00:08:41.487 "uuid": "b3cdf4ce-0502-48db-b35d-ad8db1a08b46", 00:08:41.487 "assigned_rate_limits": { 00:08:41.487 "rw_ios_per_sec": 0, 00:08:41.487 "rw_mbytes_per_sec": 0, 00:08:41.487 "r_mbytes_per_sec": 0, 00:08:41.487 "w_mbytes_per_sec": 0 00:08:41.487 }, 00:08:41.487 "claimed": true, 00:08:41.487 "claim_type": 
"exclusive_write", 00:08:41.487 "zoned": false, 00:08:41.487 "supported_io_types": { 00:08:41.487 "read": true, 00:08:41.487 "write": true, 00:08:41.487 "unmap": true, 00:08:41.487 "flush": true, 00:08:41.487 "reset": true, 00:08:41.487 "nvme_admin": false, 00:08:41.487 "nvme_io": false, 00:08:41.487 "nvme_io_md": false, 00:08:41.487 "write_zeroes": true, 00:08:41.487 "zcopy": true, 00:08:41.487 "get_zone_info": false, 00:08:41.487 "zone_management": false, 00:08:41.487 "zone_append": false, 00:08:41.487 "compare": false, 00:08:41.487 "compare_and_write": false, 00:08:41.487 "abort": true, 00:08:41.487 "seek_hole": false, 00:08:41.487 "seek_data": false, 00:08:41.487 "copy": true, 00:08:41.487 "nvme_iov_md": false 00:08:41.487 }, 00:08:41.487 "memory_domains": [ 00:08:41.487 { 00:08:41.487 "dma_device_id": "system", 00:08:41.487 "dma_device_type": 1 00:08:41.487 }, 00:08:41.487 { 00:08:41.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.487 "dma_device_type": 2 00:08:41.487 } 00:08:41.487 ], 00:08:41.487 "driver_specific": {} 00:08:41.487 } 00:08:41.487 ] 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.487 "name": "Existed_Raid", 00:08:41.487 "uuid": "d607b6c9-f447-4fc5-ac8c-49e0bafec98f", 00:08:41.487 "strip_size_kb": 0, 00:08:41.487 "state": "online", 00:08:41.487 "raid_level": "raid1", 00:08:41.487 "superblock": true, 00:08:41.487 "num_base_bdevs": 2, 00:08:41.487 "num_base_bdevs_discovered": 2, 00:08:41.487 "num_base_bdevs_operational": 2, 00:08:41.487 "base_bdevs_list": [ 00:08:41.487 { 00:08:41.487 "name": "BaseBdev1", 00:08:41.487 "uuid": "1797da80-be3d-45a0-ace2-94aedb9ef9ee", 00:08:41.487 "is_configured": true, 00:08:41.487 "data_offset": 2048, 00:08:41.487 "data_size": 63488 
00:08:41.487 }, 00:08:41.487 { 00:08:41.487 "name": "BaseBdev2", 00:08:41.487 "uuid": "b3cdf4ce-0502-48db-b35d-ad8db1a08b46", 00:08:41.487 "is_configured": true, 00:08:41.487 "data_offset": 2048, 00:08:41.487 "data_size": 63488 00:08:41.487 } 00:08:41.487 ] 00:08:41.487 }' 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.487 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.747 [2024-11-20 17:43:08.892837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.747 17:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.008 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.008 "name": 
"Existed_Raid", 00:08:42.008 "aliases": [ 00:08:42.008 "d607b6c9-f447-4fc5-ac8c-49e0bafec98f" 00:08:42.008 ], 00:08:42.008 "product_name": "Raid Volume", 00:08:42.008 "block_size": 512, 00:08:42.008 "num_blocks": 63488, 00:08:42.008 "uuid": "d607b6c9-f447-4fc5-ac8c-49e0bafec98f", 00:08:42.008 "assigned_rate_limits": { 00:08:42.008 "rw_ios_per_sec": 0, 00:08:42.008 "rw_mbytes_per_sec": 0, 00:08:42.008 "r_mbytes_per_sec": 0, 00:08:42.008 "w_mbytes_per_sec": 0 00:08:42.008 }, 00:08:42.008 "claimed": false, 00:08:42.008 "zoned": false, 00:08:42.008 "supported_io_types": { 00:08:42.008 "read": true, 00:08:42.008 "write": true, 00:08:42.008 "unmap": false, 00:08:42.008 "flush": false, 00:08:42.008 "reset": true, 00:08:42.008 "nvme_admin": false, 00:08:42.008 "nvme_io": false, 00:08:42.008 "nvme_io_md": false, 00:08:42.008 "write_zeroes": true, 00:08:42.008 "zcopy": false, 00:08:42.008 "get_zone_info": false, 00:08:42.008 "zone_management": false, 00:08:42.008 "zone_append": false, 00:08:42.008 "compare": false, 00:08:42.008 "compare_and_write": false, 00:08:42.008 "abort": false, 00:08:42.008 "seek_hole": false, 00:08:42.008 "seek_data": false, 00:08:42.008 "copy": false, 00:08:42.008 "nvme_iov_md": false 00:08:42.008 }, 00:08:42.008 "memory_domains": [ 00:08:42.008 { 00:08:42.008 "dma_device_id": "system", 00:08:42.008 "dma_device_type": 1 00:08:42.008 }, 00:08:42.008 { 00:08:42.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.008 "dma_device_type": 2 00:08:42.008 }, 00:08:42.008 { 00:08:42.008 "dma_device_id": "system", 00:08:42.008 "dma_device_type": 1 00:08:42.008 }, 00:08:42.008 { 00:08:42.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.008 "dma_device_type": 2 00:08:42.008 } 00:08:42.008 ], 00:08:42.008 "driver_specific": { 00:08:42.008 "raid": { 00:08:42.008 "uuid": "d607b6c9-f447-4fc5-ac8c-49e0bafec98f", 00:08:42.008 "strip_size_kb": 0, 00:08:42.008 "state": "online", 00:08:42.008 "raid_level": "raid1", 00:08:42.008 "superblock": true, 00:08:42.008 
"num_base_bdevs": 2, 00:08:42.008 "num_base_bdevs_discovered": 2, 00:08:42.008 "num_base_bdevs_operational": 2, 00:08:42.008 "base_bdevs_list": [ 00:08:42.008 { 00:08:42.008 "name": "BaseBdev1", 00:08:42.008 "uuid": "1797da80-be3d-45a0-ace2-94aedb9ef9ee", 00:08:42.008 "is_configured": true, 00:08:42.008 "data_offset": 2048, 00:08:42.008 "data_size": 63488 00:08:42.008 }, 00:08:42.008 { 00:08:42.008 "name": "BaseBdev2", 00:08:42.008 "uuid": "b3cdf4ce-0502-48db-b35d-ad8db1a08b46", 00:08:42.008 "is_configured": true, 00:08:42.008 "data_offset": 2048, 00:08:42.008 "data_size": 63488 00:08:42.008 } 00:08:42.008 ] 00:08:42.008 } 00:08:42.008 } 00:08:42.008 }' 00:08:42.008 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.008 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:42.008 BaseBdev2' 00:08:42.008 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.008 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.008 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.008 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.008 17:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 [2024-11-20 17:43:09.064257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:42.008 17:43:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.008 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.274 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.274 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.274 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.274 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.274 17:43:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.274 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.274 "name": "Existed_Raid", 00:08:42.274 "uuid": "d607b6c9-f447-4fc5-ac8c-49e0bafec98f", 00:08:42.274 "strip_size_kb": 0, 00:08:42.274 "state": "online", 00:08:42.274 "raid_level": "raid1", 00:08:42.274 "superblock": true, 00:08:42.274 "num_base_bdevs": 2, 00:08:42.274 "num_base_bdevs_discovered": 1, 00:08:42.274 "num_base_bdevs_operational": 1, 00:08:42.274 "base_bdevs_list": [ 00:08:42.274 { 00:08:42.274 "name": null, 00:08:42.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.274 "is_configured": false, 00:08:42.274 "data_offset": 0, 00:08:42.274 "data_size": 63488 00:08:42.274 }, 00:08:42.274 { 00:08:42.274 "name": "BaseBdev2", 00:08:42.274 "uuid": "b3cdf4ce-0502-48db-b35d-ad8db1a08b46", 00:08:42.274 "is_configured": true, 00:08:42.274 "data_offset": 2048, 00:08:42.274 "data_size": 63488 00:08:42.274 } 00:08:42.274 ] 00:08:42.274 }' 00:08:42.274 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.274 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.534 17:43:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.534 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.534 [2024-11-20 17:43:09.627453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.534 [2024-11-20 17:43:09.627598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.793 [2024-11-20 17:43:09.737437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.793 [2024-11-20 17:43:09.737521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.793 [2024-11-20 17:43:09.737536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63337 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63337 ']' 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63337 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63337 00:08:42.793 killing process with pid 63337 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63337' 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63337 00:08:42.793 [2024-11-20 17:43:09.828645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.793 17:43:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 63337 00:08:42.793 [2024-11-20 17:43:09.846727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.172 17:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:44.172 00:08:44.172 real 0m5.103s 00:08:44.172 user 0m7.257s 00:08:44.172 sys 0m0.804s 00:08:44.172 ************************************ 00:08:44.172 END TEST raid_state_function_test_sb 00:08:44.172 ************************************ 00:08:44.172 17:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.172 17:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.172 17:43:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:44.172 17:43:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:44.172 17:43:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.172 17:43:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.172 ************************************ 00:08:44.172 START TEST raid_superblock_test 00:08:44.172 ************************************ 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:44.172 17:43:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:44.172 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63589 00:08:44.173 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:44.173 17:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63589 00:08:44.173 17:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63589 ']' 00:08:44.173 17:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.173 17:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.173 17:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:44.173 17:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:44.173 17:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.173 [2024-11-20 17:43:11.272000] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:08:44.173 [2024-11-20 17:43:11.272256] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63589 ]
00:08:44.432 [2024-11-20 17:43:11.451734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:44.432 [2024-11-20 17:43:11.593456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.693 [2024-11-20 17:43:11.848898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:44.693 [2024-11-20 17:43:11.849124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.994 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.253 malloc1
00:08:45.253 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.253 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:45.253 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.253 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.253 [2024-11-20 17:43:12.214081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:45.253 [2024-11-20 17:43:12.214161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:45.254 [2024-11-20 17:43:12.214188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:45.254 [2024-11-20 17:43:12.214199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:45.254 [2024-11-20 17:43:12.216733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:45.254 [2024-11-20 17:43:12.216776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:45.254 pt1
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.254 malloc2
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.254 [2024-11-20 17:43:12.280136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:45.254 [2024-11-20 17:43:12.280302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:45.254 [2024-11-20 17:43:12.280354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:45.254 [2024-11-20 17:43:12.280401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:45.254 [2024-11-20 17:43:12.283214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:45.254 [2024-11-20 17:43:12.283312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:45.254 pt2
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.254 [2024-11-20 17:43:12.292249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:45.254 [2024-11-20 17:43:12.294499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:45.254 [2024-11-20 17:43:12.294726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:45.254 [2024-11-20 17:43:12.294779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:45.254 [2024-11-20 17:43:12.295099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:45.254 [2024-11-20 17:43:12.295321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:45.254 [2024-11-20 17:43:12.295370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:08:45.254 [2024-11-20 17:43:12.295578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:45.254 "name": "raid_bdev1",
00:08:45.254 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80",
00:08:45.254 "strip_size_kb": 0,
00:08:45.254 "state": "online",
00:08:45.254 "raid_level": "raid1",
00:08:45.254 "superblock": true,
00:08:45.254 "num_base_bdevs": 2,
00:08:45.254 "num_base_bdevs_discovered": 2,
00:08:45.254 "num_base_bdevs_operational": 2,
00:08:45.254 "base_bdevs_list": [
00:08:45.254 {
00:08:45.254 "name": "pt1",
00:08:45.254 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:45.254 "is_configured": true,
00:08:45.254 "data_offset": 2048,
00:08:45.254 "data_size": 63488
00:08:45.254 },
00:08:45.254 {
00:08:45.254 "name": "pt2",
00:08:45.254 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:45.254 "is_configured": true,
00:08:45.254 "data_offset": 2048,
00:08:45.254 "data_size": 63488
00:08:45.254 }
00:08:45.254 ]
00:08:45.254 }'
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:45.254 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.825 [2024-11-20 17:43:12.707886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:45.825 "name": "raid_bdev1",
00:08:45.825 "aliases": [
00:08:45.825 "14eb1603-a4cc-404b-9437-463b1d9b2c80"
00:08:45.825 ],
00:08:45.825 "product_name": "Raid Volume",
00:08:45.825 "block_size": 512,
00:08:45.825 "num_blocks": 63488,
00:08:45.825 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80",
00:08:45.825 "assigned_rate_limits": {
00:08:45.825 "rw_ios_per_sec": 0,
00:08:45.825 "rw_mbytes_per_sec": 0,
00:08:45.825 "r_mbytes_per_sec": 0,
00:08:45.825 "w_mbytes_per_sec": 0
00:08:45.825 },
00:08:45.825 "claimed": false,
00:08:45.825 "zoned": false,
00:08:45.825 "supported_io_types": {
00:08:45.825 "read": true,
00:08:45.825 "write": true,
00:08:45.825 "unmap": false,
00:08:45.825 "flush": false,
00:08:45.825 "reset": true,
00:08:45.825 "nvme_admin": false,
00:08:45.825 "nvme_io": false,
00:08:45.825 "nvme_io_md": false,
00:08:45.825 "write_zeroes": true,
00:08:45.825 "zcopy": false,
00:08:45.825 "get_zone_info": false,
00:08:45.825 "zone_management": false,
00:08:45.825 "zone_append": false,
00:08:45.825 "compare": false,
00:08:45.825 "compare_and_write": false,
00:08:45.825 "abort": false,
00:08:45.825 "seek_hole": false,
00:08:45.825 "seek_data": false,
00:08:45.825 "copy": false,
00:08:45.825 "nvme_iov_md": false
00:08:45.825 },
00:08:45.825 "memory_domains": [
00:08:45.825 {
00:08:45.825 "dma_device_id": "system",
00:08:45.825 "dma_device_type": 1
00:08:45.825 },
00:08:45.825 {
00:08:45.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.825 "dma_device_type": 2
00:08:45.825 },
00:08:45.825 {
00:08:45.825 "dma_device_id": "system",
00:08:45.825 "dma_device_type": 1
00:08:45.825 },
00:08:45.825 {
00:08:45.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.825 "dma_device_type": 2
00:08:45.825 }
00:08:45.825 ],
00:08:45.825 "driver_specific": {
00:08:45.825 "raid": {
00:08:45.825 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80",
00:08:45.825 "strip_size_kb": 0,
00:08:45.825 "state": "online",
00:08:45.825 "raid_level": "raid1",
00:08:45.825 "superblock": true,
00:08:45.825 "num_base_bdevs": 2,
00:08:45.825 "num_base_bdevs_discovered": 2,
00:08:45.825 "num_base_bdevs_operational": 2,
00:08:45.825 "base_bdevs_list": [
00:08:45.825 {
00:08:45.825 "name": "pt1",
00:08:45.825 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:45.825 "is_configured": true,
00:08:45.825 "data_offset": 2048,
00:08:45.825 "data_size": 63488
00:08:45.825 },
00:08:45.825 {
00:08:45.825 "name": "pt2",
00:08:45.825 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:45.825 "is_configured": true,
00:08:45.825 "data_offset": 2048,
00:08:45.825 "data_size": 63488
00:08:45.825 }
00:08:45.825 ]
00:08:45.825 }
00:08:45.825 }
00:08:45.825 }'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:45.825 pt2'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.825 [2024-11-20 17:43:12.951546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=14eb1603-a4cc-404b-9437-463b1d9b2c80
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 14eb1603-a4cc-404b-9437-463b1d9b2c80 ']'
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.825 17:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.086 [2024-11-20 17:43:12.999061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:46.086 [2024-11-20 17:43:12.999205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:46.086 [2024-11-20 17:43:12.999349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:46.086 [2024-11-20 17:43:12.999428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:46.086 [2024-11-20 17:43:12.999448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.086 [2024-11-20 17:43:13.122838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:46.086 [2024-11-20 17:43:13.125152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:46.086 [2024-11-20 17:43:13.125223] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:46.086 [2024-11-20 17:43:13.125286] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:46.086 [2024-11-20 17:43:13.125307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:46.086 [2024-11-20 17:43:13.125318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:08:46.086 request:
00:08:46.086 {
00:08:46.086 "name": "raid_bdev1",
00:08:46.086 "raid_level": "raid1",
00:08:46.086 "base_bdevs": [
00:08:46.086 "malloc1",
00:08:46.086 "malloc2"
00:08:46.086 ],
00:08:46.086 "superblock": false,
00:08:46.086 "method": "bdev_raid_create",
00:08:46.086 "req_id": 1
00:08:46.086 }
00:08:46.086 Got JSON-RPC error response
00:08:46.086 response:
00:08:46.086 {
00:08:46.086 "code": -17,
00:08:46.086 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:46.086 }
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:46.086 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.087 [2024-11-20 17:43:13.182693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:46.087 [2024-11-20 17:43:13.182835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:46.087 [2024-11-20 17:43:13.182876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:46.087 [2024-11-20 17:43:13.182909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:46.087 [2024-11-20 17:43:13.185645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:46.087 [2024-11-20 17:43:13.185727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:46.087 [2024-11-20 17:43:13.185839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:46.087 [2024-11-20 17:43:13.185923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:46.087 pt1
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:46.087 "name": "raid_bdev1",
00:08:46.087 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80",
00:08:46.087 "strip_size_kb": 0,
00:08:46.087 "state": "configuring",
00:08:46.087 "raid_level": "raid1",
00:08:46.087 "superblock": true,
00:08:46.087 "num_base_bdevs": 2,
00:08:46.087 "num_base_bdevs_discovered": 1,
00:08:46.087 "num_base_bdevs_operational": 2,
00:08:46.087 "base_bdevs_list": [
00:08:46.087 {
00:08:46.087 "name": "pt1",
00:08:46.087 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:46.087 "is_configured": true,
00:08:46.087 "data_offset": 2048,
00:08:46.087 "data_size": 63488
00:08:46.087 },
00:08:46.087 {
00:08:46.087 "name": null,
00:08:46.087 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:46.087 "is_configured": false,
00:08:46.087 "data_offset": 2048,
00:08:46.087 "data_size": 63488
00:08:46.087 }
00:08:46.087 ]
00:08:46.087 }'
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:46.087 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.657 [2024-11-20 17:43:13.598104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:46.657 [2024-11-20 17:43:13.598210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:46.657 [2024-11-20 17:43:13.598238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:08:46.657 [2024-11-20 17:43:13.598251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:46.657 [2024-11-20 17:43:13.598848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:46.657 [2024-11-20 17:43:13.598881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:46.657 [2024-11-20 17:43:13.598985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:46.657 [2024-11-20 17:43:13.599019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:46.657 [2024-11-20 17:43:13.599175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:46.657 [2024-11-20 17:43:13.599188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:46.657 [2024-11-20 17:43:13.599471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:46.657 [2024-11-20 17:43:13.599643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:46.657 [2024-11-20 17:43:13.599659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:46.657 [2024-11-20 17:43:13.599849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:46.657 pt2
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:46.657 "name": "raid_bdev1",
00:08:46.657 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80",
00:08:46.657 "strip_size_kb": 0,
00:08:46.657 "state": "online",
00:08:46.657 "raid_level": "raid1",
00:08:46.657 "superblock": true,
00:08:46.657 "num_base_bdevs": 2,
00:08:46.657 "num_base_bdevs_discovered": 2,
00:08:46.657 "num_base_bdevs_operational": 2,
00:08:46.657 "base_bdevs_list": [
00:08:46.657 {
00:08:46.657 "name": "pt1",
00:08:46.657 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:46.657 "is_configured": true,
00:08:46.657 "data_offset": 2048,
00:08:46.657 "data_size": 63488
00:08:46.657 },
00:08:46.657 {
00:08:46.657 "name": "pt2",
00:08:46.657 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:46.657 "is_configured": true,
00:08:46.657 "data_offset": 2048,
00:08:46.657 "data_size": 63488
00:08:46.657 }
00:08:46.657 ]
00:08:46.657 }'
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:46.657 17:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.918 [2024-11-20 17:43:14.037621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:46.918 "name": "raid_bdev1",
00:08:46.918 "aliases": [
00:08:46.918 "14eb1603-a4cc-404b-9437-463b1d9b2c80"
00:08:46.918 ],
00:08:46.918 "product_name": "Raid Volume",
00:08:46.918 "block_size": 512,
00:08:46.918 "num_blocks": 63488,
00:08:46.918 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80",
00:08:46.918 "assigned_rate_limits": {
00:08:46.918 "rw_ios_per_sec": 0,
00:08:46.918 "rw_mbytes_per_sec": 0,
00:08:46.918 "r_mbytes_per_sec": 0,
00:08:46.918 "w_mbytes_per_sec": 0
00:08:46.918 },
00:08:46.918 "claimed": false,
00:08:46.918 "zoned": false,
00:08:46.918 "supported_io_types": {
00:08:46.918 "read": true,
00:08:46.918 "write": true,
00:08:46.918 "unmap": false,
00:08:46.918 "flush": false,
00:08:46.918 "reset": true,
00:08:46.918 "nvme_admin": false,
00:08:46.918 "nvme_io": false,
00:08:46.918 "nvme_io_md": false,
00:08:46.918 "write_zeroes": true,
00:08:46.918 "zcopy": false,
00:08:46.918 "get_zone_info": false,
00:08:46.918 "zone_management": false, 00:08:46.918 "zone_append": false, 00:08:46.918 "compare": false, 00:08:46.918 "compare_and_write": false, 00:08:46.918 "abort": false, 00:08:46.918 "seek_hole": false, 00:08:46.918 "seek_data": false, 00:08:46.918 "copy": false, 00:08:46.918 "nvme_iov_md": false 00:08:46.918 }, 00:08:46.918 "memory_domains": [ 00:08:46.918 { 00:08:46.918 "dma_device_id": "system", 00:08:46.918 "dma_device_type": 1 00:08:46.918 }, 00:08:46.918 { 00:08:46.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.918 "dma_device_type": 2 00:08:46.918 }, 00:08:46.918 { 00:08:46.918 "dma_device_id": "system", 00:08:46.918 "dma_device_type": 1 00:08:46.918 }, 00:08:46.918 { 00:08:46.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.918 "dma_device_type": 2 00:08:46.918 } 00:08:46.918 ], 00:08:46.918 "driver_specific": { 00:08:46.918 "raid": { 00:08:46.918 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80", 00:08:46.918 "strip_size_kb": 0, 00:08:46.918 "state": "online", 00:08:46.918 "raid_level": "raid1", 00:08:46.918 "superblock": true, 00:08:46.918 "num_base_bdevs": 2, 00:08:46.918 "num_base_bdevs_discovered": 2, 00:08:46.918 "num_base_bdevs_operational": 2, 00:08:46.918 "base_bdevs_list": [ 00:08:46.918 { 00:08:46.918 "name": "pt1", 00:08:46.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.918 "is_configured": true, 00:08:46.918 "data_offset": 2048, 00:08:46.918 "data_size": 63488 00:08:46.918 }, 00:08:46.918 { 00:08:46.918 "name": "pt2", 00:08:46.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.918 "is_configured": true, 00:08:46.918 "data_offset": 2048, 00:08:46.918 "data_size": 63488 00:08:46.918 } 00:08:46.918 ] 00:08:46.918 } 00:08:46.918 } 00:08:46.918 }' 00:08:46.918 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:08:47.179 pt2' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 [2024-11-20 17:43:14.277202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 14eb1603-a4cc-404b-9437-463b1d9b2c80 '!=' 14eb1603-a4cc-404b-9437-463b1d9b2c80 ']' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 [2024-11-20 17:43:14.304952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.179 "name": "raid_bdev1", 00:08:47.179 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80", 00:08:47.179 "strip_size_kb": 0, 00:08:47.179 "state": "online", 00:08:47.179 "raid_level": "raid1", 00:08:47.179 "superblock": true, 00:08:47.179 "num_base_bdevs": 2, 00:08:47.179 "num_base_bdevs_discovered": 1, 00:08:47.179 "num_base_bdevs_operational": 1, 00:08:47.179 "base_bdevs_list": [ 00:08:47.179 { 00:08:47.179 "name": null, 00:08:47.179 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:47.179 "is_configured": false, 00:08:47.179 "data_offset": 0, 00:08:47.179 "data_size": 63488 00:08:47.179 }, 00:08:47.179 { 00:08:47.179 "name": "pt2", 00:08:47.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.179 "is_configured": true, 00:08:47.179 "data_offset": 2048, 00:08:47.179 "data_size": 63488 00:08:47.179 } 00:08:47.179 ] 00:08:47.179 }' 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.179 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.749 [2024-11-20 17:43:14.700324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.749 [2024-11-20 17:43:14.700493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.749 [2024-11-20 17:43:14.700631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.749 [2024-11-20 17:43:14.700708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.749 [2024-11-20 17:43:14.700759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.749 
17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.749 [2024-11-20 17:43:14.772156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 
00:08:47.749 [2024-11-20 17:43:14.772244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.749 [2024-11-20 17:43:14.772267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:47.749 [2024-11-20 17:43:14.772279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.749 [2024-11-20 17:43:14.775095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.749 [2024-11-20 17:43:14.775138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:47.749 [2024-11-20 17:43:14.775242] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:47.749 [2024-11-20 17:43:14.775295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.749 [2024-11-20 17:43:14.775422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:47.749 [2024-11-20 17:43:14.775437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.749 [2024-11-20 17:43:14.775707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:47.749 [2024-11-20 17:43:14.775896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:47.749 [2024-11-20 17:43:14.775908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:47.749 [2024-11-20 17:43:14.776147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.749 pt2 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.749 "name": "raid_bdev1", 00:08:47.749 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80", 00:08:47.749 "strip_size_kb": 0, 00:08:47.749 "state": "online", 00:08:47.749 "raid_level": "raid1", 00:08:47.749 "superblock": true, 00:08:47.749 "num_base_bdevs": 2, 00:08:47.749 "num_base_bdevs_discovered": 1, 00:08:47.749 "num_base_bdevs_operational": 1, 00:08:47.749 "base_bdevs_list": [ 00:08:47.749 { 00:08:47.749 "name": null, 00:08:47.749 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:47.749 "is_configured": false, 00:08:47.749 "data_offset": 2048, 00:08:47.749 "data_size": 63488 00:08:47.749 }, 00:08:47.749 { 00:08:47.749 "name": "pt2", 00:08:47.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.749 "is_configured": true, 00:08:47.749 "data_offset": 2048, 00:08:47.749 "data_size": 63488 00:08:47.749 } 00:08:47.749 ] 00:08:47.749 }' 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.749 17:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.320 [2024-11-20 17:43:15.215482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.320 [2024-11-20 17:43:15.215636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.320 [2024-11-20 17:43:15.215774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.320 [2024-11-20 17:43:15.215874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.320 [2024-11-20 17:43:15.215924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.320 
17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.320 [2024-11-20 17:43:15.275391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:48.320 [2024-11-20 17:43:15.275557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.320 [2024-11-20 17:43:15.275606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:48.320 [2024-11-20 17:43:15.275638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.320 [2024-11-20 17:43:15.278348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.320 [2024-11-20 17:43:15.278464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:48.320 [2024-11-20 17:43:15.278606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:48.320 [2024-11-20 17:43:15.278689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:48.320 [2024-11-20 17:43:15.278877] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 
00:08:48.320 [2024-11-20 17:43:15.278934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.320 [2024-11-20 17:43:15.278982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:48.320 [2024-11-20 17:43:15.279108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:48.320 [2024-11-20 17:43:15.279242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:48.320 [2024-11-20 17:43:15.279282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.320 [2024-11-20 17:43:15.279619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:48.320 [2024-11-20 17:43:15.279833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:48.320 [2024-11-20 17:43:15.279880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:48.320 [2024-11-20 17:43:15.280148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.320 pt1 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.320 17:43:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.320 "name": "raid_bdev1", 00:08:48.320 "uuid": "14eb1603-a4cc-404b-9437-463b1d9b2c80", 00:08:48.320 "strip_size_kb": 0, 00:08:48.320 "state": "online", 00:08:48.320 "raid_level": "raid1", 00:08:48.320 "superblock": true, 00:08:48.320 "num_base_bdevs": 2, 00:08:48.320 "num_base_bdevs_discovered": 1, 00:08:48.320 "num_base_bdevs_operational": 1, 00:08:48.320 "base_bdevs_list": [ 00:08:48.320 { 00:08:48.320 "name": null, 00:08:48.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.320 "is_configured": false, 00:08:48.320 "data_offset": 2048, 00:08:48.320 "data_size": 63488 00:08:48.320 }, 00:08:48.320 { 00:08:48.320 "name": "pt2", 00:08:48.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.320 "is_configured": true, 00:08:48.320 "data_offset": 2048, 00:08:48.320 "data_size": 63488 00:08:48.320 } 
00:08:48.320 ] 00:08:48.320 }' 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.320 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.581 [2024-11-20 17:43:15.735577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.581 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 14eb1603-a4cc-404b-9437-463b1d9b2c80 '!=' 14eb1603-a4cc-404b-9437-463b1d9b2c80 ']' 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63589 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63589 ']' 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63589 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63589 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63589' 00:08:48.839 killing process with pid 63589 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63589 00:08:48.839 [2024-11-20 17:43:15.819708] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.839 17:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63589 00:08:48.839 [2024-11-20 17:43:15.819942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.839 [2024-11-20 17:43:15.820003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.839 [2024-11-20 17:43:15.820040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:49.098 [2024-11-20 17:43:16.067767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.478 17:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:50.478 00:08:50.478 real 0m6.103s 00:08:50.478 user 0m9.080s 00:08:50.478 sys 0m1.108s 00:08:50.478 17:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.478 ************************************ 00:08:50.478 END TEST 
raid_superblock_test 00:08:50.478 ************************************ 00:08:50.478 17:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.478 17:43:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:50.478 17:43:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:50.478 17:43:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.478 17:43:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.478 ************************************ 00:08:50.478 START TEST raid_read_error_test 00:08:50.478 ************************************ 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Kl7aZ2uEcb 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63914 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63914 00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63914 ']' 00:08:50.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:50.478 17:43:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.479 17:43:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.479 17:43:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.479 17:43:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.479 17:43:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 [2024-11-20 17:43:17.426148] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:50.479 [2024-11-20 17:43:17.426278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63914 ] 00:08:50.479 [2024-11-20 17:43:17.607915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.755 [2024-11-20 17:43:17.728138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.015 [2024-11-20 17:43:17.936637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.015 [2024-11-20 17:43:17.936706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.276 17:43:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.276 BaseBdev1_malloc 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.276 true 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.276 [2024-11-20 17:43:18.365819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.276 [2024-11-20 17:43:18.365878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.276 [2024-11-20 17:43:18.365918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:51.276 [2024-11-20 17:43:18.365931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.276 [2024-11-20 17:43:18.368317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.276 [2024-11-20 17:43:18.368420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.276 BaseBdev1 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.276 
17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.276 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.277 BaseBdev2_malloc 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.277 true 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.277 [2024-11-20 17:43:18.430731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:51.277 [2024-11-20 17:43:18.430850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.277 [2024-11-20 17:43:18.430875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:51.277 [2024-11-20 17:43:18.430887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.277 [2024-11-20 17:43:18.433270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:08:51.277 [2024-11-20 17:43:18.433315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:51.277 BaseBdev2 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.277 [2024-11-20 17:43:18.442810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.277 [2024-11-20 17:43:18.444958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.277 [2024-11-20 17:43:18.445222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.277 [2024-11-20 17:43:18.445248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:51.277 [2024-11-20 17:43:18.445554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:51.277 [2024-11-20 17:43:18.445773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.277 [2024-11-20 17:43:18.445787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:51.277 [2024-11-20 17:43:18.445977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.277 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.536 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.536 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.536 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.536 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.536 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.536 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.536 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.536 "name": "raid_bdev1", 00:08:51.536 "uuid": "f777b625-5b8d-479b-a6b8-5eb12c95076a", 00:08:51.536 "strip_size_kb": 0, 00:08:51.536 "state": "online", 00:08:51.537 "raid_level": "raid1", 00:08:51.537 "superblock": true, 00:08:51.537 "num_base_bdevs": 2, 00:08:51.537 "num_base_bdevs_discovered": 2, 00:08:51.537 "num_base_bdevs_operational": 2, 00:08:51.537 "base_bdevs_list": [ 00:08:51.537 { 00:08:51.537 "name": "BaseBdev1", 00:08:51.537 "uuid": 
"a6395fbc-f65d-52c1-b2fe-01678c730477", 00:08:51.537 "is_configured": true, 00:08:51.537 "data_offset": 2048, 00:08:51.537 "data_size": 63488 00:08:51.537 }, 00:08:51.537 { 00:08:51.537 "name": "BaseBdev2", 00:08:51.537 "uuid": "08f9ef07-f9d7-55c8-b1c5-611fc6281cd8", 00:08:51.537 "is_configured": true, 00:08:51.537 "data_offset": 2048, 00:08:51.537 "data_size": 63488 00:08:51.537 } 00:08:51.537 ] 00:08:51.537 }' 00:08:51.537 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.537 17:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.795 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:51.795 17:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:51.795 [2024-11-20 17:43:18.963320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.733 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.992 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.992 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.992 17:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.992 17:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.992 17:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.992 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.993 "name": "raid_bdev1", 00:08:52.993 "uuid": "f777b625-5b8d-479b-a6b8-5eb12c95076a", 00:08:52.993 "strip_size_kb": 0, 00:08:52.993 "state": "online", 00:08:52.993 "raid_level": "raid1", 00:08:52.993 "superblock": true, 00:08:52.993 "num_base_bdevs": 2, 00:08:52.993 "num_base_bdevs_discovered": 2, 00:08:52.993 "num_base_bdevs_operational": 2, 
00:08:52.993 "base_bdevs_list": [ 00:08:52.993 { 00:08:52.993 "name": "BaseBdev1", 00:08:52.993 "uuid": "a6395fbc-f65d-52c1-b2fe-01678c730477", 00:08:52.993 "is_configured": true, 00:08:52.993 "data_offset": 2048, 00:08:52.993 "data_size": 63488 00:08:52.993 }, 00:08:52.993 { 00:08:52.993 "name": "BaseBdev2", 00:08:52.993 "uuid": "08f9ef07-f9d7-55c8-b1c5-611fc6281cd8", 00:08:52.993 "is_configured": true, 00:08:52.993 "data_offset": 2048, 00:08:52.993 "data_size": 63488 00:08:52.993 } 00:08:52.993 ] 00:08:52.993 }' 00:08:52.993 17:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.993 17:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.253 [2024-11-20 17:43:20.306071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.253 [2024-11-20 17:43:20.306107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.253 [2024-11-20 17:43:20.308656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.253 [2024-11-20 17:43:20.308701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.253 [2024-11-20 17:43:20.308781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.253 [2024-11-20 17:43:20.308793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.253 { 00:08:53.253 "results": [ 00:08:53.253 { 00:08:53.253 
"job": "raid_bdev1", 00:08:53.253 "core_mask": "0x1", 00:08:53.253 "workload": "randrw", 00:08:53.253 "percentage": 50, 00:08:53.253 "status": "finished", 00:08:53.253 "queue_depth": 1, 00:08:53.253 "io_size": 131072, 00:08:53.253 "runtime": 1.343368, 00:08:53.253 "iops": 17627.336664264745, 00:08:53.253 "mibps": 2203.417083033093, 00:08:53.253 "io_failed": 0, 00:08:53.253 "io_timeout": 0, 00:08:53.253 "avg_latency_us": 54.083573704709075, 00:08:53.253 "min_latency_us": 23.14061135371179, 00:08:53.253 "max_latency_us": 1495.3082969432314 00:08:53.253 } 00:08:53.253 ], 00:08:53.253 "core_count": 1 00:08:53.253 } 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63914 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63914 ']' 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63914 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63914 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63914' 00:08:53.253 killing process with pid 63914 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63914 00:08:53.253 [2024-11-20 17:43:20.355293] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.253 17:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63914 00:08:53.513 
[2024-11-20 17:43:20.490411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Kl7aZ2uEcb 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:54.922 00:08:54.922 real 0m4.394s 00:08:54.922 user 0m5.263s 00:08:54.922 sys 0m0.531s 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.922 ************************************ 00:08:54.922 END TEST raid_read_error_test 00:08:54.922 ************************************ 00:08:54.922 17:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.922 17:43:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:54.922 17:43:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:54.922 17:43:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.922 17:43:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.922 ************************************ 00:08:54.922 START TEST raid_write_error_test 00:08:54.922 ************************************ 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:54.922 
17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:54.922 17:43:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iS6SuvrnJj 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64064 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64064 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:54.922 17:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64064 ']' 00:08:54.923 17:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.923 17:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.923 17:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.923 17:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.923 17:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.923 [2024-11-20 17:43:21.887921] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:08:54.923 [2024-11-20 17:43:21.888150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64064 ] 00:08:54.923 [2024-11-20 17:43:22.065032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.204 [2024-11-20 17:43:22.184540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.463 [2024-11-20 17:43:22.393784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.463 [2024-11-20 17:43:22.393849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.723 BaseBdev1_malloc 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.723 true 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.723 [2024-11-20 17:43:22.808891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:55.723 [2024-11-20 17:43:22.808995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.723 [2024-11-20 17:43:22.809050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:55.723 [2024-11-20 17:43:22.809086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.723 [2024-11-20 17:43:22.811239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.723 [2024-11-20 17:43:22.811320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:55.723 BaseBdev1 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.723 BaseBdev2_malloc 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:55.723 17:43:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.723 true 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.723 [2024-11-20 17:43:22.879329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:55.723 [2024-11-20 17:43:22.879395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.723 [2024-11-20 17:43:22.879417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:55.723 [2024-11-20 17:43:22.879429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.723 [2024-11-20 17:43:22.882028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.723 [2024-11-20 17:43:22.882130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:55.723 BaseBdev2 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.723 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.723 [2024-11-20 17:43:22.891385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:55.723 [2024-11-20 17:43:22.893565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.723 [2024-11-20 17:43:22.893895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.723 [2024-11-20 17:43:22.893961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.723 [2024-11-20 17:43:22.894327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:55.723 [2024-11-20 17:43:22.894581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.723 [2024-11-20 17:43:22.894633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:55.723 [2024-11-20 17:43:22.894899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.983 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.984 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.984 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.984 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.984 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.984 "name": "raid_bdev1", 00:08:55.984 "uuid": "ba2fe55f-050f-4635-b27d-cf3851bda560", 00:08:55.984 "strip_size_kb": 0, 00:08:55.984 "state": "online", 00:08:55.984 "raid_level": "raid1", 00:08:55.984 "superblock": true, 00:08:55.984 "num_base_bdevs": 2, 00:08:55.984 "num_base_bdevs_discovered": 2, 00:08:55.984 "num_base_bdevs_operational": 2, 00:08:55.984 "base_bdevs_list": [ 00:08:55.984 { 00:08:55.984 "name": "BaseBdev1", 00:08:55.984 "uuid": "a09d6bc7-43c5-5a8b-bb1c-b463d24536bf", 00:08:55.984 "is_configured": true, 00:08:55.984 "data_offset": 2048, 00:08:55.984 "data_size": 63488 00:08:55.984 }, 00:08:55.984 { 00:08:55.984 "name": "BaseBdev2", 00:08:55.984 "uuid": "bde25a33-ff40-5ae6-ab95-e516dc8bac40", 00:08:55.984 "is_configured": true, 00:08:55.984 "data_offset": 2048, 00:08:55.984 "data_size": 63488 00:08:55.984 } 00:08:55.984 ] 00:08:55.984 }' 00:08:55.984 17:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.984 17:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.244 17:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:56.244 17:43:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:56.504 [2024-11-20 17:43:23.459689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.443 [2024-11-20 17:43:24.376235] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:57.443 [2024-11-20 17:43:24.376298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.443 [2024-11-20 17:43:24.376512] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.443 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.444 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.444 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.444 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.444 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.444 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.444 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.444 "name": "raid_bdev1", 00:08:57.444 "uuid": "ba2fe55f-050f-4635-b27d-cf3851bda560", 00:08:57.444 "strip_size_kb": 0, 00:08:57.444 "state": "online", 00:08:57.444 "raid_level": "raid1", 00:08:57.444 "superblock": true, 00:08:57.444 "num_base_bdevs": 2, 00:08:57.444 "num_base_bdevs_discovered": 1, 00:08:57.444 "num_base_bdevs_operational": 1, 00:08:57.444 "base_bdevs_list": [ 00:08:57.444 { 00:08:57.444 "name": null, 00:08:57.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.444 "is_configured": false, 00:08:57.444 "data_offset": 0, 00:08:57.444 "data_size": 63488 00:08:57.444 }, 00:08:57.444 { 00:08:57.444 "name": 
"BaseBdev2", 00:08:57.444 "uuid": "bde25a33-ff40-5ae6-ab95-e516dc8bac40", 00:08:57.444 "is_configured": true, 00:08:57.444 "data_offset": 2048, 00:08:57.444 "data_size": 63488 00:08:57.444 } 00:08:57.444 ] 00:08:57.444 }' 00:08:57.444 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.444 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 [2024-11-20 17:43:24.838521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:57.748 [2024-11-20 17:43:24.838562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.748 [2024-11-20 17:43:24.841532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.748 [2024-11-20 17:43:24.841586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.748 [2024-11-20 17:43:24.841649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.748 [2024-11-20 17:43:24.841660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:57.748 { 00:08:57.748 "results": [ 00:08:57.748 { 00:08:57.748 "job": "raid_bdev1", 00:08:57.748 "core_mask": "0x1", 00:08:57.748 "workload": "randrw", 00:08:57.748 "percentage": 50, 00:08:57.748 "status": "finished", 00:08:57.748 "queue_depth": 1, 00:08:57.748 "io_size": 131072, 00:08:57.748 "runtime": 1.379614, 00:08:57.748 "iops": 19788.868480603996, 00:08:57.748 "mibps": 2473.6085600754996, 00:08:57.748 "io_failed": 0, 00:08:57.748 "io_timeout": 0, 
00:08:57.748 "avg_latency_us": 47.772470352750325, 00:08:57.748 "min_latency_us": 22.581659388646287, 00:08:57.748 "max_latency_us": 1423.7624454148472 00:08:57.748 } 00:08:57.748 ], 00:08:57.748 "core_count": 1 00:08:57.748 } 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64064 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64064 ']' 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64064 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.748 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64064 00:08:58.007 killing process with pid 64064 00:08:58.007 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.007 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.007 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64064' 00:08:58.007 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64064 00:08:58.007 [2024-11-20 17:43:24.899870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.007 17:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64064 00:08:58.007 [2024-11-20 17:43:25.043631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iS6SuvrnJj 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:59.386 00:08:59.386 real 0m4.506s 00:08:59.386 user 0m5.422s 00:08:59.386 sys 0m0.566s 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.386 17:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.386 ************************************ 00:08:59.386 END TEST raid_write_error_test 00:08:59.386 ************************************ 00:08:59.386 17:43:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:59.386 17:43:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:59.386 17:43:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:59.386 17:43:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:59.386 17:43:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.386 17:43:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.386 ************************************ 00:08:59.386 START TEST raid_state_function_test 00:08:59.386 ************************************ 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:59.386 
17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64203 00:08:59.386 Process raid pid: 64203 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64203' 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64203 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64203 ']' 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.386 17:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.386 [2024-11-20 17:43:26.454199] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:59.386 [2024-11-20 17:43:26.454326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.646 [2024-11-20 17:43:26.607532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.646 [2024-11-20 17:43:26.728767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.905 [2024-11-20 17:43:26.937292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.905 [2024-11-20 17:43:26.937348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.164 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.165 [2024-11-20 17:43:27.327908] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.165 [2024-11-20 17:43:27.327964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.165 [2024-11-20 17:43:27.327975] 
bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.165 [2024-11-20 17:43:27.327985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.165 [2024-11-20 17:43:27.327992] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.165 [2024-11-20 17:43:27.328001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.165 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:00.424 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.424 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.424 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.424 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.424 "name": "Existed_Raid", 00:09:00.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.424 "strip_size_kb": 64, 00:09:00.424 "state": "configuring", 00:09:00.424 "raid_level": "raid0", 00:09:00.424 "superblock": false, 00:09:00.424 "num_base_bdevs": 3, 00:09:00.424 "num_base_bdevs_discovered": 0, 00:09:00.424 "num_base_bdevs_operational": 3, 00:09:00.424 "base_bdevs_list": [ 00:09:00.424 { 00:09:00.424 "name": "BaseBdev1", 00:09:00.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.424 "is_configured": false, 00:09:00.424 "data_offset": 0, 00:09:00.424 "data_size": 0 00:09:00.424 }, 00:09:00.424 { 00:09:00.424 "name": "BaseBdev2", 00:09:00.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.424 "is_configured": false, 00:09:00.424 "data_offset": 0, 00:09:00.424 "data_size": 0 00:09:00.424 }, 00:09:00.424 { 00:09:00.424 "name": "BaseBdev3", 00:09:00.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.424 "is_configured": false, 00:09:00.424 "data_offset": 0, 00:09:00.424 "data_size": 0 00:09:00.424 } 00:09:00.424 ] 00:09:00.424 }' 00:09:00.425 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.425 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.684 17:43:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.684 [2024-11-20 17:43:27.803087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.684 [2024-11-20 17:43:27.803125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.684 [2024-11-20 17:43:27.811041] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.684 [2024-11-20 17:43:27.811084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.684 [2024-11-20 17:43:27.811093] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.684 [2024-11-20 17:43:27.811102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.684 [2024-11-20 17:43:27.811109] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.684 [2024-11-20 17:43:27.811117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.684 [2024-11-20 17:43:27.855453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.684 BaseBdev1 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.684 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.944 [ 00:09:00.944 { 00:09:00.944 "name": "BaseBdev1", 00:09:00.944 "aliases": [ 00:09:00.944 "d113b7fc-f2e5-4b12-86cd-c4ae3b080eee" 00:09:00.944 ], 00:09:00.944 
"product_name": "Malloc disk", 00:09:00.944 "block_size": 512, 00:09:00.944 "num_blocks": 65536, 00:09:00.944 "uuid": "d113b7fc-f2e5-4b12-86cd-c4ae3b080eee", 00:09:00.944 "assigned_rate_limits": { 00:09:00.944 "rw_ios_per_sec": 0, 00:09:00.944 "rw_mbytes_per_sec": 0, 00:09:00.944 "r_mbytes_per_sec": 0, 00:09:00.944 "w_mbytes_per_sec": 0 00:09:00.944 }, 00:09:00.944 "claimed": true, 00:09:00.944 "claim_type": "exclusive_write", 00:09:00.944 "zoned": false, 00:09:00.944 "supported_io_types": { 00:09:00.944 "read": true, 00:09:00.944 "write": true, 00:09:00.944 "unmap": true, 00:09:00.944 "flush": true, 00:09:00.944 "reset": true, 00:09:00.944 "nvme_admin": false, 00:09:00.944 "nvme_io": false, 00:09:00.944 "nvme_io_md": false, 00:09:00.944 "write_zeroes": true, 00:09:00.944 "zcopy": true, 00:09:00.944 "get_zone_info": false, 00:09:00.944 "zone_management": false, 00:09:00.944 "zone_append": false, 00:09:00.944 "compare": false, 00:09:00.944 "compare_and_write": false, 00:09:00.944 "abort": true, 00:09:00.944 "seek_hole": false, 00:09:00.944 "seek_data": false, 00:09:00.944 "copy": true, 00:09:00.944 "nvme_iov_md": false 00:09:00.944 }, 00:09:00.944 "memory_domains": [ 00:09:00.944 { 00:09:00.944 "dma_device_id": "system", 00:09:00.944 "dma_device_type": 1 00:09:00.944 }, 00:09:00.944 { 00:09:00.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.944 "dma_device_type": 2 00:09:00.944 } 00:09:00.944 ], 00:09:00.944 "driver_specific": {} 00:09:00.944 } 00:09:00.944 ] 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.944 17:43:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.944 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.945 "name": "Existed_Raid", 00:09:00.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.945 "strip_size_kb": 64, 00:09:00.945 "state": "configuring", 00:09:00.945 "raid_level": "raid0", 00:09:00.945 "superblock": false, 00:09:00.945 "num_base_bdevs": 3, 00:09:00.945 "num_base_bdevs_discovered": 1, 00:09:00.945 "num_base_bdevs_operational": 3, 00:09:00.945 "base_bdevs_list": [ 00:09:00.945 { 00:09:00.945 "name": "BaseBdev1", 
00:09:00.945 "uuid": "d113b7fc-f2e5-4b12-86cd-c4ae3b080eee", 00:09:00.945 "is_configured": true, 00:09:00.945 "data_offset": 0, 00:09:00.945 "data_size": 65536 00:09:00.945 }, 00:09:00.945 { 00:09:00.945 "name": "BaseBdev2", 00:09:00.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.945 "is_configured": false, 00:09:00.945 "data_offset": 0, 00:09:00.945 "data_size": 0 00:09:00.945 }, 00:09:00.945 { 00:09:00.945 "name": "BaseBdev3", 00:09:00.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.945 "is_configured": false, 00:09:00.945 "data_offset": 0, 00:09:00.945 "data_size": 0 00:09:00.945 } 00:09:00.945 ] 00:09:00.945 }' 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.945 17:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.205 [2024-11-20 17:43:28.318730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.205 [2024-11-20 17:43:28.318795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.205 [2024-11-20 
17:43:28.326747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.205 [2024-11-20 17:43:28.328667] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.205 [2024-11-20 17:43:28.328781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.205 [2024-11-20 17:43:28.328796] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.205 [2024-11-20 17:43:28.328806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.205 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.464 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.464 "name": "Existed_Raid", 00:09:01.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.464 "strip_size_kb": 64, 00:09:01.464 "state": "configuring", 00:09:01.464 "raid_level": "raid0", 00:09:01.464 "superblock": false, 00:09:01.464 "num_base_bdevs": 3, 00:09:01.464 "num_base_bdevs_discovered": 1, 00:09:01.464 "num_base_bdevs_operational": 3, 00:09:01.464 "base_bdevs_list": [ 00:09:01.464 { 00:09:01.464 "name": "BaseBdev1", 00:09:01.464 "uuid": "d113b7fc-f2e5-4b12-86cd-c4ae3b080eee", 00:09:01.464 "is_configured": true, 00:09:01.464 "data_offset": 0, 00:09:01.464 "data_size": 65536 00:09:01.464 }, 00:09:01.464 { 00:09:01.464 "name": "BaseBdev2", 00:09:01.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.464 "is_configured": false, 00:09:01.464 "data_offset": 0, 00:09:01.464 "data_size": 0 00:09:01.464 }, 00:09:01.464 { 00:09:01.464 "name": "BaseBdev3", 00:09:01.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.464 "is_configured": false, 00:09:01.464 "data_offset": 0, 00:09:01.464 "data_size": 0 00:09:01.464 } 00:09:01.464 ] 00:09:01.464 }' 00:09:01.464 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
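Aside: throughout this run, `verify_raid_bdev_state` (bdev_raid.sh@113) and the geometry check at bdev_raid.sh@192 extract fields from `rpc_cmd bdev_raid_get_bdevs all` / `bdev_get_bdevs` with jq. A minimal standalone sketch of those two filters, run against sample JSON standing in for live RPC output (values are illustrative, not from this run; requires jq, which the test harness itself uses):

```shell
#!/bin/sh
# Sample shaped like `rpc_cmd bdev_raid_get_bdevs all` output (illustrative values)
raids='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":1}]'

# Same filter as bdev_raid.sh@113: select one raid bdev by name
info=$(printf '%s' "$raids" | jq -r '.[] | select(.name == "Existed_Raid")')
printf '%s\n' "$info" | jq -r '.state'    # prints "configuring"

# Same filter as bdev_raid.sh@192: join geometry fields. Missing keys become
# null, and jq's join() renders null as an empty string, which is why the log
# compares cmp_base_bdev against '512   ' with trailing spaces.
bdev='[{"name":"BaseBdev1","block_size":512}]'
printf '%s' "$bdev" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
```

This explains the otherwise odd-looking pattern match `[[ 512 == \5\1\2\ \ \ ]]` in the trace: the three escaped spaces stand for the three absent metadata fields.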
00:09:01.464 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.723 [2024-11-20 17:43:28.821412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.723 BaseBdev2 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.723 17:43:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.723 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.723 [ 00:09:01.723 { 00:09:01.723 "name": "BaseBdev2", 00:09:01.723 "aliases": [ 00:09:01.723 "ad479d50-c6c3-4f26-b78a-166aa89e7278" 00:09:01.723 ], 00:09:01.723 "product_name": "Malloc disk", 00:09:01.723 "block_size": 512, 00:09:01.723 "num_blocks": 65536, 00:09:01.723 "uuid": "ad479d50-c6c3-4f26-b78a-166aa89e7278", 00:09:01.723 "assigned_rate_limits": { 00:09:01.723 "rw_ios_per_sec": 0, 00:09:01.723 "rw_mbytes_per_sec": 0, 00:09:01.723 "r_mbytes_per_sec": 0, 00:09:01.723 "w_mbytes_per_sec": 0 00:09:01.723 }, 00:09:01.723 "claimed": true, 00:09:01.723 "claim_type": "exclusive_write", 00:09:01.723 "zoned": false, 00:09:01.723 "supported_io_types": { 00:09:01.723 "read": true, 00:09:01.723 "write": true, 00:09:01.723 "unmap": true, 00:09:01.723 "flush": true, 00:09:01.723 "reset": true, 00:09:01.723 "nvme_admin": false, 00:09:01.723 "nvme_io": false, 00:09:01.723 "nvme_io_md": false, 00:09:01.723 "write_zeroes": true, 00:09:01.724 "zcopy": true, 00:09:01.724 "get_zone_info": false, 00:09:01.724 "zone_management": false, 00:09:01.724 "zone_append": false, 00:09:01.724 "compare": false, 00:09:01.724 "compare_and_write": false, 00:09:01.724 "abort": true, 00:09:01.724 "seek_hole": false, 00:09:01.724 "seek_data": false, 00:09:01.724 "copy": true, 00:09:01.724 "nvme_iov_md": false 00:09:01.724 }, 00:09:01.724 "memory_domains": [ 00:09:01.724 { 00:09:01.724 "dma_device_id": "system", 00:09:01.724 "dma_device_type": 1 00:09:01.724 }, 00:09:01.724 { 00:09:01.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.724 "dma_device_type": 2 00:09:01.724 } 00:09:01.724 ], 00:09:01.724 "driver_specific": {} 00:09:01.724 } 00:09:01.724 ] 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.724 17:43:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.724 17:43:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.984 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.984 "name": "Existed_Raid", 00:09:01.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.984 "strip_size_kb": 64, 00:09:01.984 "state": "configuring", 00:09:01.984 "raid_level": "raid0", 00:09:01.984 "superblock": false, 00:09:01.984 "num_base_bdevs": 3, 00:09:01.984 "num_base_bdevs_discovered": 2, 00:09:01.984 "num_base_bdevs_operational": 3, 00:09:01.984 "base_bdevs_list": [ 00:09:01.984 { 00:09:01.984 "name": "BaseBdev1", 00:09:01.984 "uuid": "d113b7fc-f2e5-4b12-86cd-c4ae3b080eee", 00:09:01.984 "is_configured": true, 00:09:01.984 "data_offset": 0, 00:09:01.984 "data_size": 65536 00:09:01.984 }, 00:09:01.984 { 00:09:01.984 "name": "BaseBdev2", 00:09:01.984 "uuid": "ad479d50-c6c3-4f26-b78a-166aa89e7278", 00:09:01.984 "is_configured": true, 00:09:01.984 "data_offset": 0, 00:09:01.984 "data_size": 65536 00:09:01.984 }, 00:09:01.984 { 00:09:01.984 "name": "BaseBdev3", 00:09:01.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.984 "is_configured": false, 00:09:01.984 "data_offset": 0, 00:09:01.984 "data_size": 0 00:09:01.984 } 00:09:01.984 ] 00:09:01.984 }' 00:09:01.984 17:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.984 17:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.244 [2024-11-20 17:43:29.351861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.244 [2024-11-20 17:43:29.351903] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.244 [2024-11-20 17:43:29.351917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:02.244 [2024-11-20 17:43:29.352315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:02.244 [2024-11-20 17:43:29.352600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.244 [2024-11-20 17:43:29.352663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:02.244 BaseBdev3 00:09:02.244 [2024-11-20 17:43:29.353034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.244 
17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.244 [ 00:09:02.244 { 00:09:02.244 "name": "BaseBdev3", 00:09:02.244 "aliases": [ 00:09:02.244 "9ebc6a70-7869-4265-982c-0144355b1b2d" 00:09:02.244 ], 00:09:02.244 "product_name": "Malloc disk", 00:09:02.244 "block_size": 512, 00:09:02.244 "num_blocks": 65536, 00:09:02.244 "uuid": "9ebc6a70-7869-4265-982c-0144355b1b2d", 00:09:02.244 "assigned_rate_limits": { 00:09:02.244 "rw_ios_per_sec": 0, 00:09:02.244 "rw_mbytes_per_sec": 0, 00:09:02.244 "r_mbytes_per_sec": 0, 00:09:02.244 "w_mbytes_per_sec": 0 00:09:02.244 }, 00:09:02.244 "claimed": true, 00:09:02.244 "claim_type": "exclusive_write", 00:09:02.244 "zoned": false, 00:09:02.244 "supported_io_types": { 00:09:02.244 "read": true, 00:09:02.244 "write": true, 00:09:02.244 "unmap": true, 00:09:02.244 "flush": true, 00:09:02.244 "reset": true, 00:09:02.244 "nvme_admin": false, 00:09:02.244 "nvme_io": false, 00:09:02.244 "nvme_io_md": false, 00:09:02.244 "write_zeroes": true, 00:09:02.244 "zcopy": true, 00:09:02.244 "get_zone_info": false, 00:09:02.244 "zone_management": false, 00:09:02.244 "zone_append": false, 00:09:02.244 "compare": false, 00:09:02.244 "compare_and_write": false, 00:09:02.244 "abort": true, 00:09:02.244 "seek_hole": false, 00:09:02.244 "seek_data": false, 00:09:02.244 "copy": true, 00:09:02.244 "nvme_iov_md": false 00:09:02.244 }, 00:09:02.244 "memory_domains": [ 00:09:02.244 { 00:09:02.244 "dma_device_id": "system", 00:09:02.244 "dma_device_type": 1 00:09:02.244 }, 00:09:02.244 { 00:09:02.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.244 "dma_device_type": 2 00:09:02.244 } 00:09:02.244 ], 00:09:02.244 "driver_specific": {} 00:09:02.244 } 00:09:02.244 ] 
00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:02.244 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.503 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.503 "name": "Existed_Raid", 00:09:02.503 "uuid": "f8f76657-787a-4efa-99ce-ea2eaf14a227", 00:09:02.503 "strip_size_kb": 64, 00:09:02.503 "state": "online", 00:09:02.503 "raid_level": "raid0", 00:09:02.503 "superblock": false, 00:09:02.503 "num_base_bdevs": 3, 00:09:02.503 "num_base_bdevs_discovered": 3, 00:09:02.503 "num_base_bdevs_operational": 3, 00:09:02.503 "base_bdevs_list": [ 00:09:02.503 { 00:09:02.503 "name": "BaseBdev1", 00:09:02.503 "uuid": "d113b7fc-f2e5-4b12-86cd-c4ae3b080eee", 00:09:02.503 "is_configured": true, 00:09:02.503 "data_offset": 0, 00:09:02.503 "data_size": 65536 00:09:02.503 }, 00:09:02.503 { 00:09:02.503 "name": "BaseBdev2", 00:09:02.503 "uuid": "ad479d50-c6c3-4f26-b78a-166aa89e7278", 00:09:02.503 "is_configured": true, 00:09:02.503 "data_offset": 0, 00:09:02.503 "data_size": 65536 00:09:02.503 }, 00:09:02.503 { 00:09:02.503 "name": "BaseBdev3", 00:09:02.503 "uuid": "9ebc6a70-7869-4265-982c-0144355b1b2d", 00:09:02.503 "is_configured": true, 00:09:02.503 "data_offset": 0, 00:09:02.503 "data_size": 65536 00:09:02.503 } 00:09:02.503 ] 00:09:02.503 }' 00:09:02.503 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.503 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.762 [2024-11-20 17:43:29.839270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.762 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.762 "name": "Existed_Raid", 00:09:02.762 "aliases": [ 00:09:02.762 "f8f76657-787a-4efa-99ce-ea2eaf14a227" 00:09:02.762 ], 00:09:02.762 "product_name": "Raid Volume", 00:09:02.762 "block_size": 512, 00:09:02.762 "num_blocks": 196608, 00:09:02.762 "uuid": "f8f76657-787a-4efa-99ce-ea2eaf14a227", 00:09:02.762 "assigned_rate_limits": { 00:09:02.762 "rw_ios_per_sec": 0, 00:09:02.762 "rw_mbytes_per_sec": 0, 00:09:02.762 "r_mbytes_per_sec": 0, 00:09:02.762 "w_mbytes_per_sec": 0 00:09:02.762 }, 00:09:02.762 "claimed": false, 00:09:02.762 "zoned": false, 00:09:02.762 "supported_io_types": { 00:09:02.762 "read": true, 00:09:02.762 "write": true, 00:09:02.762 "unmap": true, 00:09:02.762 "flush": true, 00:09:02.762 "reset": true, 00:09:02.762 "nvme_admin": false, 00:09:02.762 "nvme_io": false, 00:09:02.762 "nvme_io_md": false, 00:09:02.762 "write_zeroes": true, 00:09:02.762 "zcopy": false, 00:09:02.762 "get_zone_info": false, 00:09:02.762 "zone_management": false, 00:09:02.762 
"zone_append": false, 00:09:02.762 "compare": false, 00:09:02.762 "compare_and_write": false, 00:09:02.762 "abort": false, 00:09:02.762 "seek_hole": false, 00:09:02.762 "seek_data": false, 00:09:02.762 "copy": false, 00:09:02.762 "nvme_iov_md": false 00:09:02.762 }, 00:09:02.762 "memory_domains": [ 00:09:02.762 { 00:09:02.762 "dma_device_id": "system", 00:09:02.762 "dma_device_type": 1 00:09:02.762 }, 00:09:02.762 { 00:09:02.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.762 "dma_device_type": 2 00:09:02.762 }, 00:09:02.762 { 00:09:02.762 "dma_device_id": "system", 00:09:02.762 "dma_device_type": 1 00:09:02.762 }, 00:09:02.762 { 00:09:02.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.762 "dma_device_type": 2 00:09:02.762 }, 00:09:02.762 { 00:09:02.762 "dma_device_id": "system", 00:09:02.762 "dma_device_type": 1 00:09:02.762 }, 00:09:02.762 { 00:09:02.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.762 "dma_device_type": 2 00:09:02.762 } 00:09:02.762 ], 00:09:02.762 "driver_specific": { 00:09:02.762 "raid": { 00:09:02.762 "uuid": "f8f76657-787a-4efa-99ce-ea2eaf14a227", 00:09:02.762 "strip_size_kb": 64, 00:09:02.762 "state": "online", 00:09:02.762 "raid_level": "raid0", 00:09:02.762 "superblock": false, 00:09:02.762 "num_base_bdevs": 3, 00:09:02.762 "num_base_bdevs_discovered": 3, 00:09:02.762 "num_base_bdevs_operational": 3, 00:09:02.762 "base_bdevs_list": [ 00:09:02.762 { 00:09:02.762 "name": "BaseBdev1", 00:09:02.762 "uuid": "d113b7fc-f2e5-4b12-86cd-c4ae3b080eee", 00:09:02.762 "is_configured": true, 00:09:02.762 "data_offset": 0, 00:09:02.762 "data_size": 65536 00:09:02.762 }, 00:09:02.762 { 00:09:02.762 "name": "BaseBdev2", 00:09:02.762 "uuid": "ad479d50-c6c3-4f26-b78a-166aa89e7278", 00:09:02.762 "is_configured": true, 00:09:02.762 "data_offset": 0, 00:09:02.762 "data_size": 65536 00:09:02.762 }, 00:09:02.763 { 00:09:02.763 "name": "BaseBdev3", 00:09:02.763 "uuid": "9ebc6a70-7869-4265-982c-0144355b1b2d", 00:09:02.763 "is_configured": true, 
00:09:02.763 "data_offset": 0, 00:09:02.763 "data_size": 65536 00:09:02.763 } 00:09:02.763 ] 00:09:02.763 } 00:09:02.763 } 00:09:02.763 }' 00:09:02.763 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.763 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:02.763 BaseBdev2 00:09:02.763 BaseBdev3' 00:09:02.763 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.021 17:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.021 [2024-11-20 17:43:30.054647] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.021 [2024-11-20 17:43:30.054677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.021 [2024-11-20 17:43:30.054732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:03.021 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.022 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.280 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.280 "name": "Existed_Raid", 00:09:03.280 "uuid": "f8f76657-787a-4efa-99ce-ea2eaf14a227", 00:09:03.280 "strip_size_kb": 64, 00:09:03.280 "state": "offline", 00:09:03.280 "raid_level": "raid0", 00:09:03.280 "superblock": false, 00:09:03.280 "num_base_bdevs": 3, 00:09:03.280 "num_base_bdevs_discovered": 2, 00:09:03.280 "num_base_bdevs_operational": 2, 00:09:03.280 "base_bdevs_list": [ 00:09:03.280 { 00:09:03.280 "name": null, 00:09:03.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.280 "is_configured": false, 00:09:03.280 "data_offset": 0, 00:09:03.280 "data_size": 65536 00:09:03.280 }, 00:09:03.280 { 00:09:03.280 "name": "BaseBdev2", 00:09:03.280 "uuid": "ad479d50-c6c3-4f26-b78a-166aa89e7278", 00:09:03.280 "is_configured": true, 00:09:03.280 "data_offset": 0, 00:09:03.280 "data_size": 65536 00:09:03.280 }, 00:09:03.280 { 00:09:03.280 "name": "BaseBdev3", 00:09:03.280 "uuid": "9ebc6a70-7869-4265-982c-0144355b1b2d", 00:09:03.280 "is_configured": true, 00:09:03.280 "data_offset": 0, 00:09:03.280 "data_size": 65536 00:09:03.280 } 00:09:03.280 ] 00:09:03.280 }' 00:09:03.280 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.280 17:43:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.540 [2024-11-20 17:43:30.614110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:03.540 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.799 17:43:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.799 [2024-11-20 17:43:30.767607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.799 [2024-11-20 17:43:30.767706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.799 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.059 BaseBdev2 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.059 17:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.059 [ 00:09:04.059 { 00:09:04.059 "name": "BaseBdev2", 00:09:04.059 "aliases": [ 00:09:04.059 "0d141d29-d5fb-43d7-bb1f-0792ad4740f4" 00:09:04.059 ], 00:09:04.059 "product_name": "Malloc disk", 00:09:04.059 "block_size": 512, 00:09:04.059 "num_blocks": 65536, 00:09:04.059 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:04.059 "assigned_rate_limits": { 00:09:04.059 "rw_ios_per_sec": 0, 00:09:04.059 "rw_mbytes_per_sec": 0, 00:09:04.059 "r_mbytes_per_sec": 0, 00:09:04.059 "w_mbytes_per_sec": 0 00:09:04.059 }, 00:09:04.059 "claimed": false, 00:09:04.059 "zoned": false, 00:09:04.059 "supported_io_types": { 00:09:04.059 "read": true, 00:09:04.059 "write": true, 00:09:04.059 "unmap": true, 00:09:04.059 "flush": true, 00:09:04.059 "reset": true, 00:09:04.059 "nvme_admin": false, 00:09:04.059 "nvme_io": false, 00:09:04.059 "nvme_io_md": false, 00:09:04.059 "write_zeroes": true, 00:09:04.059 "zcopy": true, 00:09:04.059 "get_zone_info": false, 00:09:04.059 "zone_management": false, 00:09:04.059 "zone_append": false, 00:09:04.059 "compare": false, 00:09:04.059 "compare_and_write": false, 00:09:04.059 "abort": true, 00:09:04.059 "seek_hole": false, 00:09:04.059 "seek_data": false, 00:09:04.059 "copy": true, 00:09:04.059 "nvme_iov_md": false 00:09:04.059 }, 00:09:04.059 "memory_domains": [ 00:09:04.059 { 00:09:04.059 "dma_device_id": "system", 00:09:04.059 "dma_device_type": 1 00:09:04.059 }, 
00:09:04.059 { 00:09:04.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.059 "dma_device_type": 2 00:09:04.059 } 00:09:04.059 ], 00:09:04.059 "driver_specific": {} 00:09:04.059 } 00:09:04.059 ] 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.059 BaseBdev3 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:04.059 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 [ 00:09:04.060 { 00:09:04.060 "name": "BaseBdev3", 00:09:04.060 "aliases": [ 00:09:04.060 "e6933495-c075-41cb-8398-f6f4545a1a86" 00:09:04.060 ], 00:09:04.060 "product_name": "Malloc disk", 00:09:04.060 "block_size": 512, 00:09:04.060 "num_blocks": 65536, 00:09:04.060 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:04.060 "assigned_rate_limits": { 00:09:04.060 "rw_ios_per_sec": 0, 00:09:04.060 "rw_mbytes_per_sec": 0, 00:09:04.060 "r_mbytes_per_sec": 0, 00:09:04.060 "w_mbytes_per_sec": 0 00:09:04.060 }, 00:09:04.060 "claimed": false, 00:09:04.060 "zoned": false, 00:09:04.060 "supported_io_types": { 00:09:04.060 "read": true, 00:09:04.060 "write": true, 00:09:04.060 "unmap": true, 00:09:04.060 "flush": true, 00:09:04.060 "reset": true, 00:09:04.060 "nvme_admin": false, 00:09:04.060 "nvme_io": false, 00:09:04.060 "nvme_io_md": false, 00:09:04.060 "write_zeroes": true, 00:09:04.060 "zcopy": true, 00:09:04.060 "get_zone_info": false, 00:09:04.060 "zone_management": false, 00:09:04.060 "zone_append": false, 00:09:04.060 "compare": false, 00:09:04.060 "compare_and_write": false, 00:09:04.060 "abort": true, 00:09:04.060 "seek_hole": false, 00:09:04.060 "seek_data": false, 00:09:04.060 "copy": true, 00:09:04.060 "nvme_iov_md": false 00:09:04.060 }, 00:09:04.060 "memory_domains": [ 00:09:04.060 { 00:09:04.060 "dma_device_id": "system", 00:09:04.060 "dma_device_type": 1 00:09:04.060 }, 00:09:04.060 { 
00:09:04.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.060 "dma_device_type": 2 00:09:04.060 } 00:09:04.060 ], 00:09:04.060 "driver_specific": {} 00:09:04.060 } 00:09:04.060 ] 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 [2024-11-20 17:43:31.115570] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.060 [2024-11-20 17:43:31.115672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.060 [2024-11-20 17:43:31.115717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.060 [2024-11-20 17:43:31.117536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.060 "name": "Existed_Raid", 00:09:04.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.060 "strip_size_kb": 64, 00:09:04.060 "state": "configuring", 00:09:04.060 "raid_level": "raid0", 00:09:04.060 "superblock": false, 00:09:04.060 "num_base_bdevs": 3, 00:09:04.060 "num_base_bdevs_discovered": 2, 00:09:04.060 "num_base_bdevs_operational": 3, 00:09:04.060 "base_bdevs_list": [ 00:09:04.060 { 00:09:04.060 "name": "BaseBdev1", 00:09:04.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.060 
"is_configured": false, 00:09:04.060 "data_offset": 0, 00:09:04.060 "data_size": 0 00:09:04.060 }, 00:09:04.060 { 00:09:04.060 "name": "BaseBdev2", 00:09:04.060 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:04.060 "is_configured": true, 00:09:04.060 "data_offset": 0, 00:09:04.060 "data_size": 65536 00:09:04.060 }, 00:09:04.060 { 00:09:04.060 "name": "BaseBdev3", 00:09:04.060 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:04.060 "is_configured": true, 00:09:04.060 "data_offset": 0, 00:09:04.060 "data_size": 65536 00:09:04.060 } 00:09:04.060 ] 00:09:04.060 }' 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.060 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.630 [2024-11-20 17:43:31.610751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.630 17:43:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.630 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.630 "name": "Existed_Raid", 00:09:04.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.630 "strip_size_kb": 64, 00:09:04.630 "state": "configuring", 00:09:04.630 "raid_level": "raid0", 00:09:04.630 "superblock": false, 00:09:04.630 "num_base_bdevs": 3, 00:09:04.630 "num_base_bdevs_discovered": 1, 00:09:04.630 "num_base_bdevs_operational": 3, 00:09:04.630 "base_bdevs_list": [ 00:09:04.630 { 00:09:04.630 "name": "BaseBdev1", 00:09:04.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.630 "is_configured": false, 00:09:04.630 "data_offset": 0, 00:09:04.630 "data_size": 0 00:09:04.630 }, 00:09:04.630 { 00:09:04.630 "name": null, 00:09:04.630 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:04.630 "is_configured": false, 00:09:04.630 "data_offset": 0, 
00:09:04.630 "data_size": 65536 00:09:04.630 }, 00:09:04.630 { 00:09:04.630 "name": "BaseBdev3", 00:09:04.630 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:04.631 "is_configured": true, 00:09:04.631 "data_offset": 0, 00:09:04.631 "data_size": 65536 00:09:04.631 } 00:09:04.631 ] 00:09:04.631 }' 00:09:04.631 17:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.631 17:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.890 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.890 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:04.890 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.890 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.151 [2024-11-20 17:43:32.139418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.151 BaseBdev1 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.151 [ 00:09:05.151 { 00:09:05.151 "name": "BaseBdev1", 00:09:05.151 "aliases": [ 00:09:05.151 "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd" 00:09:05.151 ], 00:09:05.151 "product_name": "Malloc disk", 00:09:05.151 "block_size": 512, 00:09:05.151 "num_blocks": 65536, 00:09:05.151 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:05.151 "assigned_rate_limits": { 00:09:05.151 "rw_ios_per_sec": 0, 00:09:05.151 "rw_mbytes_per_sec": 0, 00:09:05.151 "r_mbytes_per_sec": 0, 00:09:05.151 "w_mbytes_per_sec": 0 00:09:05.151 }, 00:09:05.151 "claimed": true, 00:09:05.151 "claim_type": "exclusive_write", 00:09:05.151 "zoned": false, 00:09:05.151 "supported_io_types": { 00:09:05.151 "read": true, 00:09:05.151 "write": true, 00:09:05.151 "unmap": 
true, 00:09:05.151 "flush": true, 00:09:05.151 "reset": true, 00:09:05.151 "nvme_admin": false, 00:09:05.151 "nvme_io": false, 00:09:05.151 "nvme_io_md": false, 00:09:05.151 "write_zeroes": true, 00:09:05.151 "zcopy": true, 00:09:05.151 "get_zone_info": false, 00:09:05.151 "zone_management": false, 00:09:05.151 "zone_append": false, 00:09:05.151 "compare": false, 00:09:05.151 "compare_and_write": false, 00:09:05.151 "abort": true, 00:09:05.151 "seek_hole": false, 00:09:05.151 "seek_data": false, 00:09:05.151 "copy": true, 00:09:05.151 "nvme_iov_md": false 00:09:05.151 }, 00:09:05.151 "memory_domains": [ 00:09:05.151 { 00:09:05.151 "dma_device_id": "system", 00:09:05.151 "dma_device_type": 1 00:09:05.151 }, 00:09:05.151 { 00:09:05.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.151 "dma_device_type": 2 00:09:05.151 } 00:09:05.151 ], 00:09:05.151 "driver_specific": {} 00:09:05.151 } 00:09:05.151 ] 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.151 17:43:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.151 "name": "Existed_Raid", 00:09:05.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.151 "strip_size_kb": 64, 00:09:05.151 "state": "configuring", 00:09:05.151 "raid_level": "raid0", 00:09:05.151 "superblock": false, 00:09:05.151 "num_base_bdevs": 3, 00:09:05.151 "num_base_bdevs_discovered": 2, 00:09:05.151 "num_base_bdevs_operational": 3, 00:09:05.151 "base_bdevs_list": [ 00:09:05.151 { 00:09:05.151 "name": "BaseBdev1", 00:09:05.151 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:05.151 "is_configured": true, 00:09:05.151 "data_offset": 0, 00:09:05.151 "data_size": 65536 00:09:05.151 }, 00:09:05.151 { 00:09:05.151 "name": null, 00:09:05.151 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:05.151 "is_configured": false, 00:09:05.151 "data_offset": 0, 00:09:05.151 "data_size": 65536 00:09:05.151 }, 00:09:05.151 { 00:09:05.151 "name": "BaseBdev3", 00:09:05.151 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:05.151 "is_configured": true, 00:09:05.151 "data_offset": 0, 
00:09:05.151 "data_size": 65536 00:09:05.151 } 00:09:05.151 ] 00:09:05.151 }' 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.151 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 [2024-11-20 17:43:32.650602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.721 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.722 "name": "Existed_Raid", 00:09:05.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.722 "strip_size_kb": 64, 00:09:05.722 "state": "configuring", 00:09:05.722 "raid_level": "raid0", 00:09:05.722 "superblock": false, 00:09:05.722 "num_base_bdevs": 3, 00:09:05.722 "num_base_bdevs_discovered": 1, 00:09:05.722 "num_base_bdevs_operational": 3, 00:09:05.722 "base_bdevs_list": [ 00:09:05.722 { 00:09:05.722 "name": "BaseBdev1", 00:09:05.722 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:05.722 "is_configured": true, 00:09:05.722 "data_offset": 0, 00:09:05.722 "data_size": 65536 00:09:05.722 }, 00:09:05.722 { 
00:09:05.722 "name": null, 00:09:05.722 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:05.722 "is_configured": false, 00:09:05.722 "data_offset": 0, 00:09:05.722 "data_size": 65536 00:09:05.722 }, 00:09:05.722 { 00:09:05.722 "name": null, 00:09:05.722 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:05.722 "is_configured": false, 00:09:05.722 "data_offset": 0, 00:09:05.722 "data_size": 65536 00:09:05.722 } 00:09:05.722 ] 00:09:05.722 }' 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.722 17:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.982 [2024-11-20 17:43:33.125863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.982 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.242 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.242 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.242 "name": "Existed_Raid", 00:09:06.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.242 "strip_size_kb": 64, 00:09:06.242 "state": "configuring", 00:09:06.242 "raid_level": "raid0", 00:09:06.242 
"superblock": false, 00:09:06.242 "num_base_bdevs": 3, 00:09:06.242 "num_base_bdevs_discovered": 2, 00:09:06.242 "num_base_bdevs_operational": 3, 00:09:06.242 "base_bdevs_list": [ 00:09:06.242 { 00:09:06.242 "name": "BaseBdev1", 00:09:06.242 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:06.242 "is_configured": true, 00:09:06.242 "data_offset": 0, 00:09:06.242 "data_size": 65536 00:09:06.242 }, 00:09:06.242 { 00:09:06.242 "name": null, 00:09:06.242 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:06.242 "is_configured": false, 00:09:06.242 "data_offset": 0, 00:09:06.242 "data_size": 65536 00:09:06.242 }, 00:09:06.242 { 00:09:06.242 "name": "BaseBdev3", 00:09:06.242 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:06.242 "is_configured": true, 00:09:06.242 "data_offset": 0, 00:09:06.242 "data_size": 65536 00:09:06.242 } 00:09:06.242 ] 00:09:06.242 }' 00:09:06.242 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.242 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.502 [2024-11-20 17:43:33.549152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.502 17:43:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.782 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.782 "name": "Existed_Raid", 00:09:06.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.782 "strip_size_kb": 64, 00:09:06.782 "state": "configuring", 00:09:06.782 "raid_level": "raid0", 00:09:06.782 "superblock": false, 00:09:06.782 "num_base_bdevs": 3, 00:09:06.782 "num_base_bdevs_discovered": 1, 00:09:06.782 "num_base_bdevs_operational": 3, 00:09:06.782 "base_bdevs_list": [ 00:09:06.782 { 00:09:06.782 "name": null, 00:09:06.782 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:06.782 "is_configured": false, 00:09:06.782 "data_offset": 0, 00:09:06.782 "data_size": 65536 00:09:06.782 }, 00:09:06.782 { 00:09:06.782 "name": null, 00:09:06.782 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:06.782 "is_configured": false, 00:09:06.782 "data_offset": 0, 00:09:06.782 "data_size": 65536 00:09:06.782 }, 00:09:06.782 { 00:09:06.782 "name": "BaseBdev3", 00:09:06.782 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:06.782 "is_configured": true, 00:09:06.782 "data_offset": 0, 00:09:06.782 "data_size": 65536 00:09:06.782 } 00:09:06.782 ] 00:09:06.782 }' 00:09:06.782 17:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.782 17:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.042 [2024-11-20 17:43:34.150415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.042 "name": "Existed_Raid", 00:09:07.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.042 "strip_size_kb": 64, 00:09:07.042 "state": "configuring", 00:09:07.042 "raid_level": "raid0", 00:09:07.042 "superblock": false, 00:09:07.042 "num_base_bdevs": 3, 00:09:07.042 "num_base_bdevs_discovered": 2, 00:09:07.042 "num_base_bdevs_operational": 3, 00:09:07.042 "base_bdevs_list": [ 00:09:07.042 { 00:09:07.042 "name": null, 00:09:07.042 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:07.042 "is_configured": false, 00:09:07.042 "data_offset": 0, 00:09:07.042 "data_size": 65536 00:09:07.042 }, 00:09:07.042 { 00:09:07.042 "name": "BaseBdev2", 00:09:07.042 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:07.042 "is_configured": true, 00:09:07.042 "data_offset": 0, 00:09:07.042 "data_size": 65536 00:09:07.042 }, 00:09:07.042 { 00:09:07.042 "name": "BaseBdev3", 00:09:07.042 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:07.042 "is_configured": true, 00:09:07.042 "data_offset": 0, 00:09:07.042 "data_size": 65536 00:09:07.042 } 00:09:07.042 ] 00:09:07.042 }' 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.042 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:07.644 
17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0b18ab5c-a91b-40e5-9336-c55ccf8dabcd 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.644 [2024-11-20 17:43:34.733124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:07.644 [2024-11-20 17:43:34.733175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:07.644 [2024-11-20 17:43:34.733187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:07.644 [2024-11-20 17:43:34.733473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:07.644 [2024-11-20 17:43:34.733662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:07.644 [2024-11-20 17:43:34.733683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:07.644 [2024-11-20 17:43:34.733983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.644 NewBaseBdev 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:07.644 [ 00:09:07.644 { 00:09:07.644 "name": "NewBaseBdev", 00:09:07.644 "aliases": [ 00:09:07.644 "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd" 00:09:07.644 ], 00:09:07.644 "product_name": "Malloc disk", 00:09:07.644 "block_size": 512, 00:09:07.644 "num_blocks": 65536, 00:09:07.644 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:07.644 "assigned_rate_limits": { 00:09:07.644 "rw_ios_per_sec": 0, 00:09:07.644 "rw_mbytes_per_sec": 0, 00:09:07.644 "r_mbytes_per_sec": 0, 00:09:07.644 "w_mbytes_per_sec": 0 00:09:07.644 }, 00:09:07.644 "claimed": true, 00:09:07.644 "claim_type": "exclusive_write", 00:09:07.644 "zoned": false, 00:09:07.644 "supported_io_types": { 00:09:07.644 "read": true, 00:09:07.644 "write": true, 00:09:07.644 "unmap": true, 00:09:07.644 "flush": true, 00:09:07.644 "reset": true, 00:09:07.644 "nvme_admin": false, 00:09:07.644 "nvme_io": false, 00:09:07.644 "nvme_io_md": false, 00:09:07.644 "write_zeroes": true, 00:09:07.644 "zcopy": true, 00:09:07.644 "get_zone_info": false, 00:09:07.644 "zone_management": false, 00:09:07.644 "zone_append": false, 00:09:07.644 "compare": false, 00:09:07.644 "compare_and_write": false, 00:09:07.644 "abort": true, 00:09:07.644 "seek_hole": false, 00:09:07.644 "seek_data": false, 00:09:07.644 "copy": true, 00:09:07.644 "nvme_iov_md": false 00:09:07.644 }, 00:09:07.644 "memory_domains": [ 00:09:07.644 { 00:09:07.644 "dma_device_id": "system", 00:09:07.644 "dma_device_type": 1 00:09:07.644 }, 00:09:07.644 { 00:09:07.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.644 "dma_device_type": 2 00:09:07.644 } 00:09:07.644 ], 00:09:07.644 "driver_specific": {} 00:09:07.644 } 00:09:07.644 ] 00:09:07.644 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.645 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.905 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.905 "name": "Existed_Raid", 00:09:07.905 "uuid": "a5b64c00-4c83-4d77-b7f8-0dcc194ec9ab", 00:09:07.905 "strip_size_kb": 64, 00:09:07.905 "state": "online", 00:09:07.905 "raid_level": "raid0", 00:09:07.905 "superblock": false, 00:09:07.905 "num_base_bdevs": 3, 00:09:07.905 
"num_base_bdevs_discovered": 3, 00:09:07.905 "num_base_bdevs_operational": 3, 00:09:07.905 "base_bdevs_list": [ 00:09:07.905 { 00:09:07.905 "name": "NewBaseBdev", 00:09:07.905 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:07.905 "is_configured": true, 00:09:07.905 "data_offset": 0, 00:09:07.905 "data_size": 65536 00:09:07.905 }, 00:09:07.905 { 00:09:07.905 "name": "BaseBdev2", 00:09:07.905 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:07.905 "is_configured": true, 00:09:07.905 "data_offset": 0, 00:09:07.905 "data_size": 65536 00:09:07.905 }, 00:09:07.905 { 00:09:07.905 "name": "BaseBdev3", 00:09:07.905 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:07.905 "is_configured": true, 00:09:07.905 "data_offset": 0, 00:09:07.905 "data_size": 65536 00:09:07.905 } 00:09:07.905 ] 00:09:07.905 }' 00:09:07.905 17:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.905 17:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.166 [2024-11-20 17:43:35.240723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.166 "name": "Existed_Raid", 00:09:08.166 "aliases": [ 00:09:08.166 "a5b64c00-4c83-4d77-b7f8-0dcc194ec9ab" 00:09:08.166 ], 00:09:08.166 "product_name": "Raid Volume", 00:09:08.166 "block_size": 512, 00:09:08.166 "num_blocks": 196608, 00:09:08.166 "uuid": "a5b64c00-4c83-4d77-b7f8-0dcc194ec9ab", 00:09:08.166 "assigned_rate_limits": { 00:09:08.166 "rw_ios_per_sec": 0, 00:09:08.166 "rw_mbytes_per_sec": 0, 00:09:08.166 "r_mbytes_per_sec": 0, 00:09:08.166 "w_mbytes_per_sec": 0 00:09:08.166 }, 00:09:08.166 "claimed": false, 00:09:08.166 "zoned": false, 00:09:08.166 "supported_io_types": { 00:09:08.166 "read": true, 00:09:08.166 "write": true, 00:09:08.166 "unmap": true, 00:09:08.166 "flush": true, 00:09:08.166 "reset": true, 00:09:08.166 "nvme_admin": false, 00:09:08.166 "nvme_io": false, 00:09:08.166 "nvme_io_md": false, 00:09:08.166 "write_zeroes": true, 00:09:08.166 "zcopy": false, 00:09:08.166 "get_zone_info": false, 00:09:08.166 "zone_management": false, 00:09:08.166 "zone_append": false, 00:09:08.166 "compare": false, 00:09:08.166 "compare_and_write": false, 00:09:08.166 "abort": false, 00:09:08.166 "seek_hole": false, 00:09:08.166 "seek_data": false, 00:09:08.166 "copy": false, 00:09:08.166 "nvme_iov_md": false 00:09:08.166 }, 00:09:08.166 "memory_domains": [ 00:09:08.166 { 00:09:08.166 "dma_device_id": "system", 00:09:08.166 "dma_device_type": 1 00:09:08.166 }, 00:09:08.166 { 00:09:08.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.166 "dma_device_type": 2 00:09:08.166 }, 
00:09:08.166 { 00:09:08.166 "dma_device_id": "system", 00:09:08.166 "dma_device_type": 1 00:09:08.166 }, 00:09:08.166 { 00:09:08.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.166 "dma_device_type": 2 00:09:08.166 }, 00:09:08.166 { 00:09:08.166 "dma_device_id": "system", 00:09:08.166 "dma_device_type": 1 00:09:08.166 }, 00:09:08.166 { 00:09:08.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.166 "dma_device_type": 2 00:09:08.166 } 00:09:08.166 ], 00:09:08.166 "driver_specific": { 00:09:08.166 "raid": { 00:09:08.166 "uuid": "a5b64c00-4c83-4d77-b7f8-0dcc194ec9ab", 00:09:08.166 "strip_size_kb": 64, 00:09:08.166 "state": "online", 00:09:08.166 "raid_level": "raid0", 00:09:08.166 "superblock": false, 00:09:08.166 "num_base_bdevs": 3, 00:09:08.166 "num_base_bdevs_discovered": 3, 00:09:08.166 "num_base_bdevs_operational": 3, 00:09:08.166 "base_bdevs_list": [ 00:09:08.166 { 00:09:08.166 "name": "NewBaseBdev", 00:09:08.166 "uuid": "0b18ab5c-a91b-40e5-9336-c55ccf8dabcd", 00:09:08.166 "is_configured": true, 00:09:08.166 "data_offset": 0, 00:09:08.166 "data_size": 65536 00:09:08.166 }, 00:09:08.166 { 00:09:08.166 "name": "BaseBdev2", 00:09:08.166 "uuid": "0d141d29-d5fb-43d7-bb1f-0792ad4740f4", 00:09:08.166 "is_configured": true, 00:09:08.166 "data_offset": 0, 00:09:08.166 "data_size": 65536 00:09:08.166 }, 00:09:08.166 { 00:09:08.166 "name": "BaseBdev3", 00:09:08.166 "uuid": "e6933495-c075-41cb-8398-f6f4545a1a86", 00:09:08.166 "is_configured": true, 00:09:08.166 "data_offset": 0, 00:09:08.166 "data_size": 65536 00:09:08.166 } 00:09:08.166 ] 00:09:08.166 } 00:09:08.166 } 00:09:08.166 }' 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.166 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:08.166 BaseBdev2 00:09:08.166 BaseBdev3' 00:09:08.166 17:43:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.427 [2024-11-20 17:43:35.487899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.427 [2024-11-20 17:43:35.487927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.427 [2024-11-20 17:43:35.488006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.427 [2024-11-20 17:43:35.488082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.427 [2024-11-20 17:43:35.488105] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64203 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64203 ']' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64203 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64203 00:09:08.427 killing process with pid 64203 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64203' 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64203 00:09:08.427 [2024-11-20 17:43:35.536484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.427 17:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64203 00:09:08.686 [2024-11-20 17:43:35.854593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.068 ************************************ 00:09:10.068 END TEST raid_state_function_test 00:09:10.068 ************************************ 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:10.068 00:09:10.068 real 0m10.667s 
00:09:10.068 user 0m16.950s 00:09:10.068 sys 0m1.798s 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.068 17:43:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:10.068 17:43:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:10.068 17:43:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.068 17:43:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.068 ************************************ 00:09:10.068 START TEST raid_state_function_test_sb 00:09:10.068 ************************************ 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64824 00:09:10.068 17:43:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64824' 00:09:10.068 Process raid pid: 64824 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64824 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64824 ']' 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.068 17:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.068 [2024-11-20 17:43:37.185663] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:09:10.068 [2024-11-20 17:43:37.185857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.328 [2024-11-20 17:43:37.363278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.328 [2024-11-20 17:43:37.477873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.604 [2024-11-20 17:43:37.677034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.605 [2024-11-20 17:43:37.677081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.863 [2024-11-20 17:43:38.029879] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.863 [2024-11-20 17:43:38.030028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.863 [2024-11-20 17:43:38.030055] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.863 [2024-11-20 17:43:38.030069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.863 [2024-11-20 17:43:38.030076] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:10.863 [2024-11-20 17:43:38.030086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.863 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.123 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.123 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.123 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.123 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.123 17:43:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.123 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.123 "name": "Existed_Raid", 00:09:11.123 "uuid": "a14c64a6-6f5e-4131-99aa-90cc5d5bb1ba", 00:09:11.123 "strip_size_kb": 64, 00:09:11.123 "state": "configuring", 00:09:11.123 "raid_level": "raid0", 00:09:11.123 "superblock": true, 00:09:11.123 "num_base_bdevs": 3, 00:09:11.123 "num_base_bdevs_discovered": 0, 00:09:11.123 "num_base_bdevs_operational": 3, 00:09:11.123 "base_bdevs_list": [ 00:09:11.123 { 00:09:11.123 "name": "BaseBdev1", 00:09:11.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.123 "is_configured": false, 00:09:11.123 "data_offset": 0, 00:09:11.123 "data_size": 0 00:09:11.123 }, 00:09:11.123 { 00:09:11.123 "name": "BaseBdev2", 00:09:11.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.123 "is_configured": false, 00:09:11.123 "data_offset": 0, 00:09:11.123 "data_size": 0 00:09:11.123 }, 00:09:11.123 { 00:09:11.123 "name": "BaseBdev3", 00:09:11.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.123 "is_configured": false, 00:09:11.123 "data_offset": 0, 00:09:11.123 "data_size": 0 00:09:11.123 } 00:09:11.123 ] 00:09:11.123 }' 00:09:11.123 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.123 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.383 [2024-11-20 17:43:38.469062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.383 [2024-11-20 17:43:38.469101] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.383 [2024-11-20 17:43:38.481023] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.383 [2024-11-20 17:43:38.481064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.383 [2024-11-20 17:43:38.481073] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.383 [2024-11-20 17:43:38.481083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.383 [2024-11-20 17:43:38.481089] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:11.383 [2024-11-20 17:43:38.481098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.383 [2024-11-20 17:43:38.529301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.383 BaseBdev1 
00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.383 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.643 [ 00:09:11.643 { 00:09:11.643 "name": "BaseBdev1", 00:09:11.643 "aliases": [ 00:09:11.643 "2662320d-6cb1-4e76-88be-028c5e56978e" 00:09:11.643 ], 00:09:11.643 "product_name": "Malloc disk", 00:09:11.643 "block_size": 512, 00:09:11.643 "num_blocks": 65536, 00:09:11.643 "uuid": "2662320d-6cb1-4e76-88be-028c5e56978e", 00:09:11.643 "assigned_rate_limits": { 00:09:11.643 
"rw_ios_per_sec": 0, 00:09:11.643 "rw_mbytes_per_sec": 0, 00:09:11.643 "r_mbytes_per_sec": 0, 00:09:11.643 "w_mbytes_per_sec": 0 00:09:11.643 }, 00:09:11.643 "claimed": true, 00:09:11.643 "claim_type": "exclusive_write", 00:09:11.643 "zoned": false, 00:09:11.643 "supported_io_types": { 00:09:11.643 "read": true, 00:09:11.643 "write": true, 00:09:11.643 "unmap": true, 00:09:11.643 "flush": true, 00:09:11.643 "reset": true, 00:09:11.643 "nvme_admin": false, 00:09:11.643 "nvme_io": false, 00:09:11.643 "nvme_io_md": false, 00:09:11.643 "write_zeroes": true, 00:09:11.643 "zcopy": true, 00:09:11.643 "get_zone_info": false, 00:09:11.643 "zone_management": false, 00:09:11.643 "zone_append": false, 00:09:11.643 "compare": false, 00:09:11.643 "compare_and_write": false, 00:09:11.643 "abort": true, 00:09:11.643 "seek_hole": false, 00:09:11.643 "seek_data": false, 00:09:11.643 "copy": true, 00:09:11.643 "nvme_iov_md": false 00:09:11.643 }, 00:09:11.643 "memory_domains": [ 00:09:11.643 { 00:09:11.643 "dma_device_id": "system", 00:09:11.643 "dma_device_type": 1 00:09:11.643 }, 00:09:11.643 { 00:09:11.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.643 "dma_device_type": 2 00:09:11.643 } 00:09:11.643 ], 00:09:11.643 "driver_specific": {} 00:09:11.643 } 00:09:11.643 ] 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.643 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.643 "name": "Existed_Raid", 00:09:11.643 "uuid": "43b3d44e-d243-4009-9b37-a7f09d3d4f8f", 00:09:11.643 "strip_size_kb": 64, 00:09:11.643 "state": "configuring", 00:09:11.643 "raid_level": "raid0", 00:09:11.643 "superblock": true, 00:09:11.643 "num_base_bdevs": 3, 00:09:11.643 "num_base_bdevs_discovered": 1, 00:09:11.644 "num_base_bdevs_operational": 3, 00:09:11.644 "base_bdevs_list": [ 00:09:11.644 { 00:09:11.644 "name": "BaseBdev1", 00:09:11.644 "uuid": "2662320d-6cb1-4e76-88be-028c5e56978e", 00:09:11.644 "is_configured": true, 00:09:11.644 "data_offset": 2048, 00:09:11.644 "data_size": 63488 
00:09:11.644 }, 00:09:11.644 { 00:09:11.644 "name": "BaseBdev2", 00:09:11.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.644 "is_configured": false, 00:09:11.644 "data_offset": 0, 00:09:11.644 "data_size": 0 00:09:11.644 }, 00:09:11.644 { 00:09:11.644 "name": "BaseBdev3", 00:09:11.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.644 "is_configured": false, 00:09:11.644 "data_offset": 0, 00:09:11.644 "data_size": 0 00:09:11.644 } 00:09:11.644 ] 00:09:11.644 }' 00:09:11.644 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.644 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.903 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.903 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.903 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.903 [2024-11-20 17:43:38.980688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.903 [2024-11-20 17:43:38.980745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:11.903 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.903 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.903 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.903 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.903 [2024-11-20 17:43:38.988726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.903 [2024-11-20 
17:43:38.990862] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.904 [2024-11-20 17:43:38.990960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.904 [2024-11-20 17:43:38.991027] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:11.904 [2024-11-20 17:43:38.991084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.904 17:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.904 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.904 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.904 "name": "Existed_Raid", 00:09:11.904 "uuid": "dd82d77a-8d57-48ce-809e-86e1f45d3db4", 00:09:11.904 "strip_size_kb": 64, 00:09:11.904 "state": "configuring", 00:09:11.904 "raid_level": "raid0", 00:09:11.904 "superblock": true, 00:09:11.904 "num_base_bdevs": 3, 00:09:11.904 "num_base_bdevs_discovered": 1, 00:09:11.904 "num_base_bdevs_operational": 3, 00:09:11.904 "base_bdevs_list": [ 00:09:11.904 { 00:09:11.904 "name": "BaseBdev1", 00:09:11.904 "uuid": "2662320d-6cb1-4e76-88be-028c5e56978e", 00:09:11.904 "is_configured": true, 00:09:11.904 "data_offset": 2048, 00:09:11.904 "data_size": 63488 00:09:11.904 }, 00:09:11.904 { 00:09:11.904 "name": "BaseBdev2", 00:09:11.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.904 "is_configured": false, 00:09:11.904 "data_offset": 0, 00:09:11.904 "data_size": 0 00:09:11.904 }, 00:09:11.904 { 00:09:11.904 "name": "BaseBdev3", 00:09:11.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.904 "is_configured": false, 00:09:11.904 "data_offset": 0, 00:09:11.904 "data_size": 0 00:09:11.904 } 00:09:11.904 ] 00:09:11.904 }' 00:09:11.904 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.904 17:43:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.473 BaseBdev2 00:09:12.473 [2024-11-20 17:43:39.482544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.473 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.473 [ 00:09:12.473 { 00:09:12.473 "name": "BaseBdev2", 00:09:12.473 "aliases": [ 00:09:12.473 "8306c97b-7275-4c3b-b64d-de5cee936675" 00:09:12.473 ], 00:09:12.473 "product_name": "Malloc disk", 00:09:12.473 "block_size": 512, 00:09:12.473 "num_blocks": 65536, 00:09:12.473 "uuid": "8306c97b-7275-4c3b-b64d-de5cee936675", 00:09:12.473 "assigned_rate_limits": { 00:09:12.473 "rw_ios_per_sec": 0, 00:09:12.473 "rw_mbytes_per_sec": 0, 00:09:12.473 "r_mbytes_per_sec": 0, 00:09:12.473 "w_mbytes_per_sec": 0 00:09:12.473 }, 00:09:12.473 "claimed": true, 00:09:12.473 "claim_type": "exclusive_write", 00:09:12.473 "zoned": false, 00:09:12.473 "supported_io_types": { 00:09:12.473 "read": true, 00:09:12.473 "write": true, 00:09:12.473 "unmap": true, 00:09:12.473 "flush": true, 00:09:12.473 "reset": true, 00:09:12.474 "nvme_admin": false, 00:09:12.474 "nvme_io": false, 00:09:12.474 "nvme_io_md": false, 00:09:12.474 "write_zeroes": true, 00:09:12.474 "zcopy": true, 00:09:12.474 "get_zone_info": false, 00:09:12.474 "zone_management": false, 00:09:12.474 "zone_append": false, 00:09:12.474 "compare": false, 00:09:12.474 "compare_and_write": false, 00:09:12.474 "abort": true, 00:09:12.474 "seek_hole": false, 00:09:12.474 "seek_data": false, 00:09:12.474 "copy": true, 00:09:12.474 "nvme_iov_md": false 00:09:12.474 }, 00:09:12.474 "memory_domains": [ 00:09:12.474 { 00:09:12.474 "dma_device_id": "system", 00:09:12.474 "dma_device_type": 1 00:09:12.474 }, 00:09:12.474 { 00:09:12.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.474 "dma_device_type": 2 00:09:12.474 } 00:09:12.474 ], 00:09:12.474 "driver_specific": {} 00:09:12.474 } 00:09:12.474 ] 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.474 "name": "Existed_Raid", 00:09:12.474 "uuid": "dd82d77a-8d57-48ce-809e-86e1f45d3db4", 00:09:12.474 "strip_size_kb": 64, 00:09:12.474 "state": "configuring", 00:09:12.474 "raid_level": "raid0", 00:09:12.474 "superblock": true, 00:09:12.474 "num_base_bdevs": 3, 00:09:12.474 "num_base_bdevs_discovered": 2, 00:09:12.474 "num_base_bdevs_operational": 3, 00:09:12.474 "base_bdevs_list": [ 00:09:12.474 { 00:09:12.474 "name": "BaseBdev1", 00:09:12.474 "uuid": "2662320d-6cb1-4e76-88be-028c5e56978e", 00:09:12.474 "is_configured": true, 00:09:12.474 "data_offset": 2048, 00:09:12.474 "data_size": 63488 00:09:12.474 }, 00:09:12.474 { 00:09:12.474 "name": "BaseBdev2", 00:09:12.474 "uuid": "8306c97b-7275-4c3b-b64d-de5cee936675", 00:09:12.474 "is_configured": true, 00:09:12.474 "data_offset": 2048, 00:09:12.474 "data_size": 63488 00:09:12.474 }, 00:09:12.474 { 00:09:12.474 "name": "BaseBdev3", 00:09:12.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.474 "is_configured": false, 00:09:12.474 "data_offset": 0, 00:09:12.474 "data_size": 0 00:09:12.474 } 00:09:12.474 ] 00:09:12.474 }' 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.474 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.042 [2024-11-20 17:43:39.974159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.042 [2024-11-20 17:43:39.974429] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:13.042 [2024-11-20 17:43:39.974456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:13.042 [2024-11-20 17:43:39.974732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:13.042 BaseBdev3 00:09:13.042 [2024-11-20 17:43:39.974917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:13.042 [2024-11-20 17:43:39.974934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:13.042 [2024-11-20 17:43:39.975120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.042 [ 00:09:13.042 { 00:09:13.042 "name": "BaseBdev3", 00:09:13.042 "aliases": [ 00:09:13.042 "2312d80a-c8be-48e9-9ca7-bc51cb66ab70" 00:09:13.042 ], 00:09:13.042 "product_name": "Malloc disk", 00:09:13.042 "block_size": 512, 00:09:13.042 "num_blocks": 65536, 00:09:13.042 "uuid": "2312d80a-c8be-48e9-9ca7-bc51cb66ab70", 00:09:13.042 "assigned_rate_limits": { 00:09:13.042 "rw_ios_per_sec": 0, 00:09:13.042 "rw_mbytes_per_sec": 0, 00:09:13.042 "r_mbytes_per_sec": 0, 00:09:13.042 "w_mbytes_per_sec": 0 00:09:13.042 }, 00:09:13.042 "claimed": true, 00:09:13.042 "claim_type": "exclusive_write", 00:09:13.042 "zoned": false, 00:09:13.042 "supported_io_types": { 00:09:13.042 "read": true, 00:09:13.042 "write": true, 00:09:13.042 "unmap": true, 00:09:13.042 "flush": true, 00:09:13.042 "reset": true, 00:09:13.042 "nvme_admin": false, 00:09:13.042 "nvme_io": false, 00:09:13.042 "nvme_io_md": false, 00:09:13.042 "write_zeroes": true, 00:09:13.042 "zcopy": true, 00:09:13.042 "get_zone_info": false, 00:09:13.042 "zone_management": false, 00:09:13.042 "zone_append": false, 00:09:13.042 "compare": false, 00:09:13.042 "compare_and_write": false, 00:09:13.042 "abort": true, 00:09:13.042 "seek_hole": false, 00:09:13.042 "seek_data": false, 00:09:13.042 "copy": true, 00:09:13.042 "nvme_iov_md": false 00:09:13.042 }, 00:09:13.042 "memory_domains": [ 00:09:13.042 { 00:09:13.042 "dma_device_id": "system", 00:09:13.042 "dma_device_type": 1 00:09:13.042 }, 00:09:13.042 { 00:09:13.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.042 "dma_device_type": 2 00:09:13.042 } 00:09:13.042 ], 00:09:13.042 "driver_specific": 
{} 00:09:13.042 } 00:09:13.042 ] 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.042 17:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.042 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.042 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.042 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:13.042 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.042 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.042 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.042 "name": "Existed_Raid", 00:09:13.042 "uuid": "dd82d77a-8d57-48ce-809e-86e1f45d3db4", 00:09:13.042 "strip_size_kb": 64, 00:09:13.042 "state": "online", 00:09:13.042 "raid_level": "raid0", 00:09:13.042 "superblock": true, 00:09:13.042 "num_base_bdevs": 3, 00:09:13.042 "num_base_bdevs_discovered": 3, 00:09:13.042 "num_base_bdevs_operational": 3, 00:09:13.042 "base_bdevs_list": [ 00:09:13.042 { 00:09:13.042 "name": "BaseBdev1", 00:09:13.042 "uuid": "2662320d-6cb1-4e76-88be-028c5e56978e", 00:09:13.042 "is_configured": true, 00:09:13.042 "data_offset": 2048, 00:09:13.042 "data_size": 63488 00:09:13.042 }, 00:09:13.042 { 00:09:13.042 "name": "BaseBdev2", 00:09:13.042 "uuid": "8306c97b-7275-4c3b-b64d-de5cee936675", 00:09:13.042 "is_configured": true, 00:09:13.042 "data_offset": 2048, 00:09:13.042 "data_size": 63488 00:09:13.042 }, 00:09:13.042 { 00:09:13.042 "name": "BaseBdev3", 00:09:13.042 "uuid": "2312d80a-c8be-48e9-9ca7-bc51cb66ab70", 00:09:13.042 "is_configured": true, 00:09:13.042 "data_offset": 2048, 00:09:13.042 "data_size": 63488 00:09:13.042 } 00:09:13.042 ] 00:09:13.042 }' 00:09:13.042 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.042 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.301 [2024-11-20 17:43:40.405847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.301 "name": "Existed_Raid", 00:09:13.301 "aliases": [ 00:09:13.301 "dd82d77a-8d57-48ce-809e-86e1f45d3db4" 00:09:13.301 ], 00:09:13.301 "product_name": "Raid Volume", 00:09:13.301 "block_size": 512, 00:09:13.301 "num_blocks": 190464, 00:09:13.301 "uuid": "dd82d77a-8d57-48ce-809e-86e1f45d3db4", 00:09:13.301 "assigned_rate_limits": { 00:09:13.301 "rw_ios_per_sec": 0, 00:09:13.301 "rw_mbytes_per_sec": 0, 00:09:13.301 "r_mbytes_per_sec": 0, 00:09:13.301 "w_mbytes_per_sec": 0 00:09:13.301 }, 00:09:13.301 "claimed": false, 00:09:13.301 "zoned": false, 00:09:13.301 "supported_io_types": { 00:09:13.301 "read": true, 00:09:13.301 "write": true, 00:09:13.301 "unmap": true, 00:09:13.301 "flush": true, 00:09:13.301 "reset": true, 00:09:13.301 "nvme_admin": false, 00:09:13.301 "nvme_io": false, 00:09:13.301 "nvme_io_md": false, 00:09:13.301 
"write_zeroes": true, 00:09:13.301 "zcopy": false, 00:09:13.301 "get_zone_info": false, 00:09:13.301 "zone_management": false, 00:09:13.301 "zone_append": false, 00:09:13.301 "compare": false, 00:09:13.301 "compare_and_write": false, 00:09:13.301 "abort": false, 00:09:13.301 "seek_hole": false, 00:09:13.301 "seek_data": false, 00:09:13.301 "copy": false, 00:09:13.301 "nvme_iov_md": false 00:09:13.301 }, 00:09:13.301 "memory_domains": [ 00:09:13.301 { 00:09:13.301 "dma_device_id": "system", 00:09:13.301 "dma_device_type": 1 00:09:13.301 }, 00:09:13.301 { 00:09:13.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.301 "dma_device_type": 2 00:09:13.301 }, 00:09:13.301 { 00:09:13.301 "dma_device_id": "system", 00:09:13.301 "dma_device_type": 1 00:09:13.301 }, 00:09:13.301 { 00:09:13.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.301 "dma_device_type": 2 00:09:13.301 }, 00:09:13.301 { 00:09:13.301 "dma_device_id": "system", 00:09:13.301 "dma_device_type": 1 00:09:13.301 }, 00:09:13.301 { 00:09:13.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.301 "dma_device_type": 2 00:09:13.301 } 00:09:13.301 ], 00:09:13.301 "driver_specific": { 00:09:13.301 "raid": { 00:09:13.301 "uuid": "dd82d77a-8d57-48ce-809e-86e1f45d3db4", 00:09:13.301 "strip_size_kb": 64, 00:09:13.301 "state": "online", 00:09:13.301 "raid_level": "raid0", 00:09:13.301 "superblock": true, 00:09:13.301 "num_base_bdevs": 3, 00:09:13.301 "num_base_bdevs_discovered": 3, 00:09:13.301 "num_base_bdevs_operational": 3, 00:09:13.301 "base_bdevs_list": [ 00:09:13.301 { 00:09:13.301 "name": "BaseBdev1", 00:09:13.301 "uuid": "2662320d-6cb1-4e76-88be-028c5e56978e", 00:09:13.301 "is_configured": true, 00:09:13.301 "data_offset": 2048, 00:09:13.301 "data_size": 63488 00:09:13.301 }, 00:09:13.301 { 00:09:13.301 "name": "BaseBdev2", 00:09:13.301 "uuid": "8306c97b-7275-4c3b-b64d-de5cee936675", 00:09:13.301 "is_configured": true, 00:09:13.301 "data_offset": 2048, 00:09:13.301 "data_size": 63488 00:09:13.301 }, 
00:09:13.301 { 00:09:13.301 "name": "BaseBdev3", 00:09:13.301 "uuid": "2312d80a-c8be-48e9-9ca7-bc51cb66ab70", 00:09:13.301 "is_configured": true, 00:09:13.301 "data_offset": 2048, 00:09:13.301 "data_size": 63488 00:09:13.301 } 00:09:13.301 ] 00:09:13.301 } 00:09:13.301 } 00:09:13.301 }' 00:09:13.301 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:13.561 BaseBdev2 00:09:13.561 BaseBdev3' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.561 
17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.561 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.562 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.562 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:13.562 17:43:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.562 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.562 [2024-11-20 17:43:40.637203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.562 [2024-11-20 17:43:40.637236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.562 [2024-11-20 17:43:40.637295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.827 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.827 "name": "Existed_Raid", 00:09:13.827 "uuid": "dd82d77a-8d57-48ce-809e-86e1f45d3db4", 00:09:13.827 "strip_size_kb": 64, 00:09:13.827 "state": "offline", 00:09:13.827 "raid_level": "raid0", 00:09:13.827 "superblock": true, 00:09:13.827 "num_base_bdevs": 3, 00:09:13.827 "num_base_bdevs_discovered": 2, 00:09:13.827 "num_base_bdevs_operational": 2, 00:09:13.827 "base_bdevs_list": [ 00:09:13.827 { 00:09:13.827 "name": null, 00:09:13.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.827 "is_configured": false, 00:09:13.827 "data_offset": 0, 00:09:13.827 "data_size": 63488 00:09:13.827 }, 00:09:13.827 { 00:09:13.827 "name": "BaseBdev2", 00:09:13.827 "uuid": "8306c97b-7275-4c3b-b64d-de5cee936675", 00:09:13.827 "is_configured": true, 00:09:13.827 "data_offset": 2048, 00:09:13.827 "data_size": 63488 00:09:13.828 }, 00:09:13.828 { 00:09:13.828 "name": "BaseBdev3", 00:09:13.828 "uuid": "2312d80a-c8be-48e9-9ca7-bc51cb66ab70", 
00:09:13.828 "is_configured": true, 00:09:13.828 "data_offset": 2048, 00:09:13.828 "data_size": 63488 00:09:13.828 } 00:09:13.828 ] 00:09:13.828 }' 00:09:13.828 17:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.828 17:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.087 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.087 [2024-11-20 17:43:41.208658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.346 [2024-11-20 17:43:41.374283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.346 [2024-11-20 17:43:41.374350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.346 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.605 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:14.605 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.606 BaseBdev2 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:14.606 17:43:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.606 [ 00:09:14.606 { 00:09:14.606 "name": "BaseBdev2", 00:09:14.606 "aliases": [ 00:09:14.606 "5676c048-65d7-4706-b60b-44c3f2ce4726" 00:09:14.606 ], 00:09:14.606 "product_name": "Malloc disk", 00:09:14.606 "block_size": 512, 00:09:14.606 "num_blocks": 65536, 00:09:14.606 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:14.606 "assigned_rate_limits": { 00:09:14.606 "rw_ios_per_sec": 0, 00:09:14.606 "rw_mbytes_per_sec": 0, 00:09:14.606 "r_mbytes_per_sec": 0, 00:09:14.606 "w_mbytes_per_sec": 0 00:09:14.606 }, 00:09:14.606 "claimed": false, 00:09:14.606 "zoned": false, 00:09:14.606 "supported_io_types": { 00:09:14.606 "read": true, 00:09:14.606 "write": true, 00:09:14.606 "unmap": true, 00:09:14.606 "flush": true, 00:09:14.606 "reset": true, 00:09:14.606 "nvme_admin": false, 00:09:14.606 "nvme_io": false, 00:09:14.606 "nvme_io_md": false, 00:09:14.606 "write_zeroes": true, 00:09:14.606 "zcopy": true, 00:09:14.606 "get_zone_info": false, 00:09:14.606 
"zone_management": false, 00:09:14.606 "zone_append": false, 00:09:14.606 "compare": false, 00:09:14.606 "compare_and_write": false, 00:09:14.606 "abort": true, 00:09:14.606 "seek_hole": false, 00:09:14.606 "seek_data": false, 00:09:14.606 "copy": true, 00:09:14.606 "nvme_iov_md": false 00:09:14.606 }, 00:09:14.606 "memory_domains": [ 00:09:14.606 { 00:09:14.606 "dma_device_id": "system", 00:09:14.606 "dma_device_type": 1 00:09:14.606 }, 00:09:14.606 { 00:09:14.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.606 "dma_device_type": 2 00:09:14.606 } 00:09:14.606 ], 00:09:14.606 "driver_specific": {} 00:09:14.606 } 00:09:14.606 ] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.606 BaseBdev3 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.606 [ 00:09:14.606 { 00:09:14.606 "name": "BaseBdev3", 00:09:14.606 "aliases": [ 00:09:14.606 "30893f51-e323-48dd-bbd7-b4e9f62a9938" 00:09:14.606 ], 00:09:14.606 "product_name": "Malloc disk", 00:09:14.606 "block_size": 512, 00:09:14.606 "num_blocks": 65536, 00:09:14.606 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:14.606 "assigned_rate_limits": { 00:09:14.606 "rw_ios_per_sec": 0, 00:09:14.606 "rw_mbytes_per_sec": 0, 00:09:14.606 "r_mbytes_per_sec": 0, 00:09:14.606 "w_mbytes_per_sec": 0 00:09:14.606 }, 00:09:14.606 "claimed": false, 00:09:14.606 "zoned": false, 00:09:14.606 "supported_io_types": { 00:09:14.606 "read": true, 00:09:14.606 "write": true, 00:09:14.606 "unmap": true, 00:09:14.606 "flush": true, 00:09:14.606 "reset": true, 00:09:14.606 "nvme_admin": false, 00:09:14.606 "nvme_io": false, 00:09:14.606 "nvme_io_md": false, 00:09:14.606 "write_zeroes": true, 00:09:14.606 
"zcopy": true, 00:09:14.606 "get_zone_info": false, 00:09:14.606 "zone_management": false, 00:09:14.606 "zone_append": false, 00:09:14.606 "compare": false, 00:09:14.606 "compare_and_write": false, 00:09:14.606 "abort": true, 00:09:14.606 "seek_hole": false, 00:09:14.606 "seek_data": false, 00:09:14.606 "copy": true, 00:09:14.606 "nvme_iov_md": false 00:09:14.606 }, 00:09:14.606 "memory_domains": [ 00:09:14.606 { 00:09:14.606 "dma_device_id": "system", 00:09:14.606 "dma_device_type": 1 00:09:14.606 }, 00:09:14.606 { 00:09:14.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.606 "dma_device_type": 2 00:09:14.606 } 00:09:14.606 ], 00:09:14.606 "driver_specific": {} 00:09:14.606 } 00:09:14.606 ] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.606 [2024-11-20 17:43:41.696343] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.606 [2024-11-20 17:43:41.696398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.606 [2024-11-20 17:43:41.696424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.606 [2024-11-20 17:43:41.698430] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.606 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.607 17:43:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.607 "name": "Existed_Raid", 00:09:14.607 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:14.607 "strip_size_kb": 64, 00:09:14.607 "state": "configuring", 00:09:14.607 "raid_level": "raid0", 00:09:14.607 "superblock": true, 00:09:14.607 "num_base_bdevs": 3, 00:09:14.607 "num_base_bdevs_discovered": 2, 00:09:14.607 "num_base_bdevs_operational": 3, 00:09:14.607 "base_bdevs_list": [ 00:09:14.607 { 00:09:14.607 "name": "BaseBdev1", 00:09:14.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.607 "is_configured": false, 00:09:14.607 "data_offset": 0, 00:09:14.607 "data_size": 0 00:09:14.607 }, 00:09:14.607 { 00:09:14.607 "name": "BaseBdev2", 00:09:14.607 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:14.607 "is_configured": true, 00:09:14.607 "data_offset": 2048, 00:09:14.607 "data_size": 63488 00:09:14.607 }, 00:09:14.607 { 00:09:14.607 "name": "BaseBdev3", 00:09:14.607 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:14.607 "is_configured": true, 00:09:14.607 "data_offset": 2048, 00:09:14.607 "data_size": 63488 00:09:14.607 } 00:09:14.607 ] 00:09:14.607 }' 00:09:14.607 17:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.607 17:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.174 [2024-11-20 17:43:42.151654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.174 17:43:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.174 "name": "Existed_Raid", 00:09:15.174 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:15.174 "strip_size_kb": 64, 
00:09:15.174 "state": "configuring", 00:09:15.174 "raid_level": "raid0", 00:09:15.174 "superblock": true, 00:09:15.174 "num_base_bdevs": 3, 00:09:15.174 "num_base_bdevs_discovered": 1, 00:09:15.174 "num_base_bdevs_operational": 3, 00:09:15.174 "base_bdevs_list": [ 00:09:15.174 { 00:09:15.174 "name": "BaseBdev1", 00:09:15.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.174 "is_configured": false, 00:09:15.174 "data_offset": 0, 00:09:15.174 "data_size": 0 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "name": null, 00:09:15.174 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:15.174 "is_configured": false, 00:09:15.174 "data_offset": 0, 00:09:15.174 "data_size": 63488 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "name": "BaseBdev3", 00:09:15.174 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:15.174 "is_configured": true, 00:09:15.174 "data_offset": 2048, 00:09:15.174 "data_size": 63488 00:09:15.174 } 00:09:15.174 ] 00:09:15.174 }' 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.174 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.435 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.435 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.435 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.435 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.435 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.695 [2024-11-20 17:43:42.670042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.695 BaseBdev1 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.695 
[ 00:09:15.695 { 00:09:15.695 "name": "BaseBdev1", 00:09:15.695 "aliases": [ 00:09:15.695 "c153e1c5-b946-40ac-9d18-c212f7649bf9" 00:09:15.695 ], 00:09:15.695 "product_name": "Malloc disk", 00:09:15.695 "block_size": 512, 00:09:15.695 "num_blocks": 65536, 00:09:15.695 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:15.695 "assigned_rate_limits": { 00:09:15.695 "rw_ios_per_sec": 0, 00:09:15.695 "rw_mbytes_per_sec": 0, 00:09:15.695 "r_mbytes_per_sec": 0, 00:09:15.695 "w_mbytes_per_sec": 0 00:09:15.695 }, 00:09:15.695 "claimed": true, 00:09:15.695 "claim_type": "exclusive_write", 00:09:15.695 "zoned": false, 00:09:15.695 "supported_io_types": { 00:09:15.695 "read": true, 00:09:15.695 "write": true, 00:09:15.695 "unmap": true, 00:09:15.695 "flush": true, 00:09:15.695 "reset": true, 00:09:15.695 "nvme_admin": false, 00:09:15.695 "nvme_io": false, 00:09:15.695 "nvme_io_md": false, 00:09:15.695 "write_zeroes": true, 00:09:15.695 "zcopy": true, 00:09:15.695 "get_zone_info": false, 00:09:15.695 "zone_management": false, 00:09:15.695 "zone_append": false, 00:09:15.695 "compare": false, 00:09:15.695 "compare_and_write": false, 00:09:15.695 "abort": true, 00:09:15.695 "seek_hole": false, 00:09:15.695 "seek_data": false, 00:09:15.695 "copy": true, 00:09:15.695 "nvme_iov_md": false 00:09:15.695 }, 00:09:15.695 "memory_domains": [ 00:09:15.695 { 00:09:15.695 "dma_device_id": "system", 00:09:15.695 "dma_device_type": 1 00:09:15.695 }, 00:09:15.695 { 00:09:15.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.695 "dma_device_type": 2 00:09:15.695 } 00:09:15.695 ], 00:09:15.695 "driver_specific": {} 00:09:15.695 } 00:09:15.695 ] 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.695 "name": "Existed_Raid", 00:09:15.695 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:15.695 "strip_size_kb": 64, 00:09:15.695 "state": "configuring", 00:09:15.695 "raid_level": "raid0", 00:09:15.695 "superblock": true, 
00:09:15.695 "num_base_bdevs": 3, 00:09:15.695 "num_base_bdevs_discovered": 2, 00:09:15.695 "num_base_bdevs_operational": 3, 00:09:15.695 "base_bdevs_list": [ 00:09:15.695 { 00:09:15.695 "name": "BaseBdev1", 00:09:15.695 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:15.695 "is_configured": true, 00:09:15.695 "data_offset": 2048, 00:09:15.695 "data_size": 63488 00:09:15.695 }, 00:09:15.695 { 00:09:15.695 "name": null, 00:09:15.695 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:15.695 "is_configured": false, 00:09:15.695 "data_offset": 0, 00:09:15.695 "data_size": 63488 00:09:15.695 }, 00:09:15.695 { 00:09:15.695 "name": "BaseBdev3", 00:09:15.695 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:15.695 "is_configured": true, 00:09:15.695 "data_offset": 2048, 00:09:15.695 "data_size": 63488 00:09:15.695 } 00:09:15.695 ] 00:09:15.695 }' 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.695 17:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.263 [2024-11-20 17:43:43.189185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:16.263 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.264 "name": "Existed_Raid", 00:09:16.264 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:16.264 "strip_size_kb": 64, 00:09:16.264 "state": "configuring", 00:09:16.264 "raid_level": "raid0", 00:09:16.264 "superblock": true, 00:09:16.264 "num_base_bdevs": 3, 00:09:16.264 "num_base_bdevs_discovered": 1, 00:09:16.264 "num_base_bdevs_operational": 3, 00:09:16.264 "base_bdevs_list": [ 00:09:16.264 { 00:09:16.264 "name": "BaseBdev1", 00:09:16.264 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:16.264 "is_configured": true, 00:09:16.264 "data_offset": 2048, 00:09:16.264 "data_size": 63488 00:09:16.264 }, 00:09:16.264 { 00:09:16.264 "name": null, 00:09:16.264 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:16.264 "is_configured": false, 00:09:16.264 "data_offset": 0, 00:09:16.264 "data_size": 63488 00:09:16.264 }, 00:09:16.264 { 00:09:16.264 "name": null, 00:09:16.264 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:16.264 "is_configured": false, 00:09:16.264 "data_offset": 0, 00:09:16.264 "data_size": 63488 00:09:16.264 } 00:09:16.264 ] 00:09:16.264 }' 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.264 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.523 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.523 [2024-11-20 17:43:43.696706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.783 "name": "Existed_Raid", 00:09:16.783 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:16.783 "strip_size_kb": 64, 00:09:16.783 "state": "configuring", 00:09:16.783 "raid_level": "raid0", 00:09:16.783 "superblock": true, 00:09:16.783 "num_base_bdevs": 3, 00:09:16.783 "num_base_bdevs_discovered": 2, 00:09:16.783 "num_base_bdevs_operational": 3, 00:09:16.783 "base_bdevs_list": [ 00:09:16.783 { 00:09:16.783 "name": "BaseBdev1", 00:09:16.783 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:16.783 "is_configured": true, 00:09:16.783 "data_offset": 2048, 00:09:16.783 "data_size": 63488 00:09:16.783 }, 00:09:16.783 { 00:09:16.783 "name": null, 00:09:16.783 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:16.783 "is_configured": false, 00:09:16.783 "data_offset": 0, 00:09:16.783 "data_size": 63488 00:09:16.783 }, 00:09:16.783 { 00:09:16.783 "name": "BaseBdev3", 00:09:16.783 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:16.783 "is_configured": true, 00:09:16.783 "data_offset": 2048, 00:09:16.783 "data_size": 63488 00:09:16.783 } 00:09:16.783 ] 00:09:16.783 }' 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.783 17:43:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.041 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.041 [2024-11-20 17:43:44.199885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.300 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.300 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.300 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.300 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.300 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.300 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.301 "name": "Existed_Raid", 00:09:17.301 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:17.301 "strip_size_kb": 64, 00:09:17.301 "state": "configuring", 00:09:17.301 "raid_level": "raid0", 00:09:17.301 "superblock": true, 00:09:17.301 "num_base_bdevs": 3, 00:09:17.301 "num_base_bdevs_discovered": 1, 00:09:17.301 "num_base_bdevs_operational": 3, 00:09:17.301 "base_bdevs_list": [ 00:09:17.301 { 00:09:17.301 "name": null, 00:09:17.301 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:17.301 "is_configured": false, 00:09:17.301 "data_offset": 0, 00:09:17.301 "data_size": 63488 00:09:17.301 }, 00:09:17.301 { 00:09:17.301 "name": null, 00:09:17.301 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:17.301 "is_configured": false, 00:09:17.301 "data_offset": 0, 00:09:17.301 
"data_size": 63488 00:09:17.301 }, 00:09:17.301 { 00:09:17.301 "name": "BaseBdev3", 00:09:17.301 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:17.301 "is_configured": true, 00:09:17.301 "data_offset": 2048, 00:09:17.301 "data_size": 63488 00:09:17.301 } 00:09:17.301 ] 00:09:17.301 }' 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.301 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.869 [2024-11-20 17:43:44.837471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.869 17:43:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.869 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.870 "name": "Existed_Raid", 00:09:17.870 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:17.870 "strip_size_kb": 64, 00:09:17.870 "state": "configuring", 00:09:17.870 "raid_level": "raid0", 00:09:17.870 "superblock": true, 00:09:17.870 "num_base_bdevs": 3, 00:09:17.870 
"num_base_bdevs_discovered": 2, 00:09:17.870 "num_base_bdevs_operational": 3, 00:09:17.870 "base_bdevs_list": [ 00:09:17.870 { 00:09:17.870 "name": null, 00:09:17.870 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:17.870 "is_configured": false, 00:09:17.870 "data_offset": 0, 00:09:17.870 "data_size": 63488 00:09:17.870 }, 00:09:17.870 { 00:09:17.870 "name": "BaseBdev2", 00:09:17.870 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:17.870 "is_configured": true, 00:09:17.870 "data_offset": 2048, 00:09:17.870 "data_size": 63488 00:09:17.870 }, 00:09:17.870 { 00:09:17.870 "name": "BaseBdev3", 00:09:17.870 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:17.870 "is_configured": true, 00:09:17.870 "data_offset": 2048, 00:09:17.870 "data_size": 63488 00:09:17.870 } 00:09:17.870 ] 00:09:17.870 }' 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.870 17:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:18.131 17:43:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.131 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c153e1c5-b946-40ac-9d18-c212f7649bf9 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.391 [2024-11-20 17:43:45.366733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:18.391 [2024-11-20 17:43:45.367005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:18.391 [2024-11-20 17:43:45.367042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.391 [2024-11-20 17:43:45.367331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:18.391 NewBaseBdev 00:09:18.391 [2024-11-20 17:43:45.367510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:18.391 [2024-11-20 17:43:45.367523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:18.391 [2024-11-20 17:43:45.367682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:18.391 
17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.391 [ 00:09:18.391 { 00:09:18.391 "name": "NewBaseBdev", 00:09:18.391 "aliases": [ 00:09:18.391 "c153e1c5-b946-40ac-9d18-c212f7649bf9" 00:09:18.391 ], 00:09:18.391 "product_name": "Malloc disk", 00:09:18.391 "block_size": 512, 00:09:18.391 "num_blocks": 65536, 00:09:18.391 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:18.391 "assigned_rate_limits": { 00:09:18.391 "rw_ios_per_sec": 0, 00:09:18.391 "rw_mbytes_per_sec": 0, 00:09:18.391 "r_mbytes_per_sec": 0, 00:09:18.391 "w_mbytes_per_sec": 0 00:09:18.391 }, 00:09:18.391 "claimed": true, 00:09:18.391 "claim_type": "exclusive_write", 00:09:18.391 "zoned": false, 00:09:18.391 "supported_io_types": { 00:09:18.391 "read": true, 00:09:18.391 "write": true, 00:09:18.391 
"unmap": true, 00:09:18.391 "flush": true, 00:09:18.391 "reset": true, 00:09:18.391 "nvme_admin": false, 00:09:18.391 "nvme_io": false, 00:09:18.391 "nvme_io_md": false, 00:09:18.391 "write_zeroes": true, 00:09:18.391 "zcopy": true, 00:09:18.391 "get_zone_info": false, 00:09:18.391 "zone_management": false, 00:09:18.391 "zone_append": false, 00:09:18.391 "compare": false, 00:09:18.391 "compare_and_write": false, 00:09:18.391 "abort": true, 00:09:18.391 "seek_hole": false, 00:09:18.391 "seek_data": false, 00:09:18.391 "copy": true, 00:09:18.391 "nvme_iov_md": false 00:09:18.391 }, 00:09:18.391 "memory_domains": [ 00:09:18.391 { 00:09:18.391 "dma_device_id": "system", 00:09:18.391 "dma_device_type": 1 00:09:18.391 }, 00:09:18.391 { 00:09:18.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.391 "dma_device_type": 2 00:09:18.391 } 00:09:18.391 ], 00:09:18.391 "driver_specific": {} 00:09:18.391 } 00:09:18.391 ] 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.391 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.392 "name": "Existed_Raid", 00:09:18.392 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:18.392 "strip_size_kb": 64, 00:09:18.392 "state": "online", 00:09:18.392 "raid_level": "raid0", 00:09:18.392 "superblock": true, 00:09:18.392 "num_base_bdevs": 3, 00:09:18.392 "num_base_bdevs_discovered": 3, 00:09:18.392 "num_base_bdevs_operational": 3, 00:09:18.392 "base_bdevs_list": [ 00:09:18.392 { 00:09:18.392 "name": "NewBaseBdev", 00:09:18.392 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:18.392 "is_configured": true, 00:09:18.392 "data_offset": 2048, 00:09:18.392 "data_size": 63488 00:09:18.392 }, 00:09:18.392 { 00:09:18.392 "name": "BaseBdev2", 00:09:18.392 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:18.392 "is_configured": true, 00:09:18.392 "data_offset": 2048, 00:09:18.392 "data_size": 63488 00:09:18.392 }, 00:09:18.392 { 00:09:18.392 "name": "BaseBdev3", 00:09:18.392 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:18.392 
"is_configured": true, 00:09:18.392 "data_offset": 2048, 00:09:18.392 "data_size": 63488 00:09:18.392 } 00:09:18.392 ] 00:09:18.392 }' 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.392 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.651 [2024-11-20 17:43:45.798386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.651 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.910 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.910 "name": "Existed_Raid", 00:09:18.910 "aliases": [ 00:09:18.910 "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec" 00:09:18.910 ], 00:09:18.910 "product_name": "Raid 
Volume", 00:09:18.910 "block_size": 512, 00:09:18.910 "num_blocks": 190464, 00:09:18.910 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:18.910 "assigned_rate_limits": { 00:09:18.910 "rw_ios_per_sec": 0, 00:09:18.910 "rw_mbytes_per_sec": 0, 00:09:18.910 "r_mbytes_per_sec": 0, 00:09:18.910 "w_mbytes_per_sec": 0 00:09:18.910 }, 00:09:18.910 "claimed": false, 00:09:18.910 "zoned": false, 00:09:18.910 "supported_io_types": { 00:09:18.910 "read": true, 00:09:18.910 "write": true, 00:09:18.910 "unmap": true, 00:09:18.910 "flush": true, 00:09:18.910 "reset": true, 00:09:18.910 "nvme_admin": false, 00:09:18.910 "nvme_io": false, 00:09:18.910 "nvme_io_md": false, 00:09:18.910 "write_zeroes": true, 00:09:18.910 "zcopy": false, 00:09:18.910 "get_zone_info": false, 00:09:18.910 "zone_management": false, 00:09:18.910 "zone_append": false, 00:09:18.910 "compare": false, 00:09:18.910 "compare_and_write": false, 00:09:18.910 "abort": false, 00:09:18.910 "seek_hole": false, 00:09:18.910 "seek_data": false, 00:09:18.910 "copy": false, 00:09:18.910 "nvme_iov_md": false 00:09:18.910 }, 00:09:18.910 "memory_domains": [ 00:09:18.910 { 00:09:18.910 "dma_device_id": "system", 00:09:18.910 "dma_device_type": 1 00:09:18.910 }, 00:09:18.910 { 00:09:18.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.910 "dma_device_type": 2 00:09:18.910 }, 00:09:18.910 { 00:09:18.910 "dma_device_id": "system", 00:09:18.910 "dma_device_type": 1 00:09:18.910 }, 00:09:18.910 { 00:09:18.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.910 "dma_device_type": 2 00:09:18.910 }, 00:09:18.910 { 00:09:18.910 "dma_device_id": "system", 00:09:18.910 "dma_device_type": 1 00:09:18.910 }, 00:09:18.910 { 00:09:18.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.910 "dma_device_type": 2 00:09:18.910 } 00:09:18.910 ], 00:09:18.910 "driver_specific": { 00:09:18.910 "raid": { 00:09:18.910 "uuid": "8b0159bd-ba50-4aed-b9a3-de4ed37d7bec", 00:09:18.910 "strip_size_kb": 64, 00:09:18.910 "state": "online", 
00:09:18.910 "raid_level": "raid0", 00:09:18.910 "superblock": true, 00:09:18.910 "num_base_bdevs": 3, 00:09:18.910 "num_base_bdevs_discovered": 3, 00:09:18.910 "num_base_bdevs_operational": 3, 00:09:18.910 "base_bdevs_list": [ 00:09:18.910 { 00:09:18.910 "name": "NewBaseBdev", 00:09:18.910 "uuid": "c153e1c5-b946-40ac-9d18-c212f7649bf9", 00:09:18.910 "is_configured": true, 00:09:18.910 "data_offset": 2048, 00:09:18.910 "data_size": 63488 00:09:18.910 }, 00:09:18.910 { 00:09:18.910 "name": "BaseBdev2", 00:09:18.910 "uuid": "5676c048-65d7-4706-b60b-44c3f2ce4726", 00:09:18.910 "is_configured": true, 00:09:18.910 "data_offset": 2048, 00:09:18.910 "data_size": 63488 00:09:18.910 }, 00:09:18.910 { 00:09:18.910 "name": "BaseBdev3", 00:09:18.910 "uuid": "30893f51-e323-48dd-bbd7-b4e9f62a9938", 00:09:18.910 "is_configured": true, 00:09:18.910 "data_offset": 2048, 00:09:18.911 "data_size": 63488 00:09:18.911 } 00:09:18.911 ] 00:09:18.911 } 00:09:18.911 } 00:09:18.911 }' 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:18.911 BaseBdev2 00:09:18.911 BaseBdev3' 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.911 17:43:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.911 17:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.911 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.171 [2024-11-20 17:43:46.101558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.171 [2024-11-20 17:43:46.101605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.171 [2024-11-20 17:43:46.101710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.171 [2024-11-20 17:43:46.101781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.171 [2024-11-20 17:43:46.101800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64824 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64824 ']' 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # 
kill -0 64824 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64824 00:09:19.171 killing process with pid 64824 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64824' 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64824 00:09:19.171 [2024-11-20 17:43:46.143876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.171 17:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64824 00:09:19.429 [2024-11-20 17:43:46.497090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.832 17:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:20.832 00:09:20.832 real 0m10.675s 00:09:20.832 user 0m16.843s 00:09:20.832 sys 0m1.820s 00:09:20.832 17:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.832 17:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.832 ************************************ 00:09:20.832 END TEST raid_state_function_test_sb 00:09:20.832 ************************************ 00:09:20.832 17:43:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:20.832 17:43:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 
00:09:20.832 17:43:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.832 17:43:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.832 ************************************ 00:09:20.832 START TEST raid_superblock_test 00:09:20.832 ************************************ 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65449 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65449 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65449 ']' 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.832 17:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.832 [2024-11-20 17:43:47.931933] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:09:20.832 [2024-11-20 17:43:47.932084] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65449 ] 00:09:21.092 [2024-11-20 17:43:48.110168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.092 [2024-11-20 17:43:48.253242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.352 [2024-11-20 17:43:48.497923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.352 [2024-11-20 17:43:48.497982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:21.611 
17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.611 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.870 malloc1 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.870 [2024-11-20 17:43:48.799889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:21.870 [2024-11-20 17:43:48.799981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.870 [2024-11-20 17:43:48.800019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:21.870 [2024-11-20 17:43:48.800031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.870 [2024-11-20 17:43:48.802576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.870 [2024-11-20 17:43:48.802614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:21.870 pt1 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.870 malloc2 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.870 [2024-11-20 17:43:48.858401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.870 [2024-11-20 17:43:48.858474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.870 [2024-11-20 17:43:48.858507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:21.870 [2024-11-20 17:43:48.858517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.870 [2024-11-20 17:43:48.861002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.870 [2024-11-20 17:43:48.861049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.870 
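[editor's note] The block counts reported throughout this log are internally consistent, and the arithmetic can be checked standalone. This is a minimal sketch, assuming only figures that appear in the log itself: `bdev_malloc_create 32 512` bases (32 MiB, 512-byte blocks, so 65536 blocks each), a superblock `data_offset` of 2048 blocks, `data_size` 63488, and three bases striped at raid0 giving the `blockcnt 190464` / `num_blocks 190464` seen for the raid volume.

```python
# Size arithmetic behind the raid0 volume in this log (sketch, not SPDK code).
# All constants are taken from values printed in the log output.

MALLOC_MIB = 32          # bdev_malloc_create 32 512
BLOCK_SIZE = 512         # block_size reported for every bdev
SUPERBLOCK_BLOCKS = 2048 # data_offset reported for each base bdev (-s superblock)
NUM_BASES = 3            # num_base_bdevs

base_blocks = MALLOC_MIB * 1024 * 1024 // BLOCK_SIZE   # num_blocks of each malloc
data_blocks = base_blocks - SUPERBLOCK_BLOCKS          # data_size of each base
raid0_blocks = data_blocks * NUM_BASES                 # blockcnt of the raid0 bdev

print(base_blocks, data_blocks, raid0_blocks)  # 65536 63488 190464
```

These match the log: each `Malloc disk` reports `num_blocks: 65536`, each base bdev reports `data_offset: 2048` and `data_size: 63488`, and the raid volume reports `num_blocks: 190464`.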
pt2 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.870 malloc3 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.870 [2024-11-20 17:43:48.930763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:21.870 [2024-11-20 17:43:48.930842] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.870 [2024-11-20 17:43:48.930869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:21.870 [2024-11-20 17:43:48.930879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.870 [2024-11-20 17:43:48.933610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.870 [2024-11-20 17:43:48.933653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:21.870 pt3 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.870 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.871 [2024-11-20 17:43:48.942809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:21.871 [2024-11-20 17:43:48.945080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.871 [2024-11-20 17:43:48.945160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:21.871 [2024-11-20 17:43:48.945359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:21.871 [2024-11-20 17:43:48.945380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.871 [2024-11-20 17:43:48.945697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:21.871 [2024-11-20 17:43:48.945892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:21.871 [2024-11-20 17:43:48.945906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:21.871 [2024-11-20 17:43:48.946134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.871 17:43:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.871 "name": "raid_bdev1", 00:09:21.871 "uuid": "035d0b49-74ff-4030-a3c6-72871f2f2fc6", 00:09:21.871 "strip_size_kb": 64, 00:09:21.871 "state": "online", 00:09:21.871 "raid_level": "raid0", 00:09:21.871 "superblock": true, 00:09:21.871 "num_base_bdevs": 3, 00:09:21.871 "num_base_bdevs_discovered": 3, 00:09:21.871 "num_base_bdevs_operational": 3, 00:09:21.871 "base_bdevs_list": [ 00:09:21.871 { 00:09:21.871 "name": "pt1", 00:09:21.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.871 "is_configured": true, 00:09:21.871 "data_offset": 2048, 00:09:21.871 "data_size": 63488 00:09:21.871 }, 00:09:21.871 { 00:09:21.871 "name": "pt2", 00:09:21.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.871 "is_configured": true, 00:09:21.871 "data_offset": 2048, 00:09:21.871 "data_size": 63488 00:09:21.871 }, 00:09:21.871 { 00:09:21.871 "name": "pt3", 00:09:21.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.871 "is_configured": true, 00:09:21.871 "data_offset": 2048, 00:09:21.871 "data_size": 63488 00:09:21.871 } 00:09:21.871 ] 00:09:21.871 }' 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.871 17:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 [2024-11-20 17:43:49.342501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.435 "name": "raid_bdev1", 00:09:22.435 "aliases": [ 00:09:22.435 "035d0b49-74ff-4030-a3c6-72871f2f2fc6" 00:09:22.435 ], 00:09:22.435 "product_name": "Raid Volume", 00:09:22.435 "block_size": 512, 00:09:22.435 "num_blocks": 190464, 00:09:22.435 "uuid": "035d0b49-74ff-4030-a3c6-72871f2f2fc6", 00:09:22.435 "assigned_rate_limits": { 00:09:22.435 "rw_ios_per_sec": 0, 00:09:22.435 "rw_mbytes_per_sec": 0, 00:09:22.435 "r_mbytes_per_sec": 0, 00:09:22.435 "w_mbytes_per_sec": 0 00:09:22.435 }, 00:09:22.435 "claimed": false, 00:09:22.435 "zoned": false, 00:09:22.435 "supported_io_types": { 00:09:22.435 "read": true, 00:09:22.435 "write": true, 00:09:22.435 "unmap": true, 00:09:22.435 "flush": true, 00:09:22.435 "reset": true, 00:09:22.435 "nvme_admin": false, 00:09:22.435 "nvme_io": false, 00:09:22.435 "nvme_io_md": false, 00:09:22.435 "write_zeroes": true, 00:09:22.435 "zcopy": false, 00:09:22.435 "get_zone_info": false, 00:09:22.435 "zone_management": false, 00:09:22.435 "zone_append": false, 00:09:22.435 "compare": 
false, 00:09:22.435 "compare_and_write": false, 00:09:22.435 "abort": false, 00:09:22.435 "seek_hole": false, 00:09:22.435 "seek_data": false, 00:09:22.435 "copy": false, 00:09:22.435 "nvme_iov_md": false 00:09:22.435 }, 00:09:22.435 "memory_domains": [ 00:09:22.435 { 00:09:22.435 "dma_device_id": "system", 00:09:22.435 "dma_device_type": 1 00:09:22.435 }, 00:09:22.435 { 00:09:22.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.435 "dma_device_type": 2 00:09:22.435 }, 00:09:22.435 { 00:09:22.435 "dma_device_id": "system", 00:09:22.435 "dma_device_type": 1 00:09:22.435 }, 00:09:22.435 { 00:09:22.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.435 "dma_device_type": 2 00:09:22.435 }, 00:09:22.435 { 00:09:22.435 "dma_device_id": "system", 00:09:22.435 "dma_device_type": 1 00:09:22.435 }, 00:09:22.435 { 00:09:22.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.435 "dma_device_type": 2 00:09:22.435 } 00:09:22.435 ], 00:09:22.435 "driver_specific": { 00:09:22.435 "raid": { 00:09:22.435 "uuid": "035d0b49-74ff-4030-a3c6-72871f2f2fc6", 00:09:22.435 "strip_size_kb": 64, 00:09:22.435 "state": "online", 00:09:22.435 "raid_level": "raid0", 00:09:22.435 "superblock": true, 00:09:22.435 "num_base_bdevs": 3, 00:09:22.435 "num_base_bdevs_discovered": 3, 00:09:22.435 "num_base_bdevs_operational": 3, 00:09:22.435 "base_bdevs_list": [ 00:09:22.435 { 00:09:22.435 "name": "pt1", 00:09:22.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.435 "is_configured": true, 00:09:22.435 "data_offset": 2048, 00:09:22.435 "data_size": 63488 00:09:22.435 }, 00:09:22.435 { 00:09:22.435 "name": "pt2", 00:09:22.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.435 "is_configured": true, 00:09:22.435 "data_offset": 2048, 00:09:22.435 "data_size": 63488 00:09:22.435 }, 00:09:22.435 { 00:09:22.435 "name": "pt3", 00:09:22.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.435 "is_configured": true, 00:09:22.435 "data_offset": 2048, 00:09:22.435 "data_size": 
63488 00:09:22.435 } 00:09:22.435 ] 00:09:22.435 } 00:09:22.435 } 00:09:22.435 }' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:22.435 pt2 00:09:22.435 pt3' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.435 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.694 [2024-11-20 17:43:49.621948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=035d0b49-74ff-4030-a3c6-72871f2f2fc6 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 035d0b49-74ff-4030-a3c6-72871f2f2fc6 ']' 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.694 [2024-11-20 17:43:49.649562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.694 [2024-11-20 17:43:49.649613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.694 [2024-11-20 17:43:49.649735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.694 [2024-11-20 17:43:49.649815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.694 [2024-11-20 17:43:49.649831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
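After `bdev_raid_delete raid_bdev1`, the script calls `bdev_raid_get_bdevs all` and pipes it through `jq -r '.[]'`; an empty array prints nothing, so `raid_bdev` stays empty and the `'[' -n '' ']'` guard is skipped. A minimal Python sketch of that emptiness check (the `response` payload here is a hypothetical stand-in for the RPC output):

```python
import json

# Hypothetical response from "bdev_raid_get_bdevs all" after the raid bdev
# has been deleted: an empty JSON array, so jq -r '.[]' emits nothing.
response = "[]"
bdevs = json.loads(response)

# Mirror of the shell logic: raid_bdev ends up empty, so the
# '[' -n "$raid_bdev" ']' test fails and teardown continues.
raid_bdev = "\n".join(json.dumps(b) for b in bdevs)
assert raid_bdev == ""
```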
00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.694 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:22.695 17:43:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.695 [2024-11-20 17:43:49.773454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:22.695 [2024-11-20 17:43:49.775756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:22.695 [2024-11-20 17:43:49.775817] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:22.695 [2024-11-20 17:43:49.775879] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:22.695 [2024-11-20 17:43:49.775952] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:22.695 [2024-11-20 17:43:49.775973] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:22.695 [2024-11-20 17:43:49.775992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.695 [2024-11-20 17:43:49.776007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:22.695 request: 00:09:22.695 { 00:09:22.695 "name": "raid_bdev1", 00:09:22.695 "raid_level": "raid0", 00:09:22.695 "base_bdevs": [ 00:09:22.695 "malloc1", 00:09:22.695 "malloc2", 00:09:22.695 "malloc3" 00:09:22.695 ], 00:09:22.695 "strip_size_kb": 64, 00:09:22.695 "superblock": false, 00:09:22.695 "method": "bdev_raid_create", 00:09:22.695 "req_id": 1 00:09:22.695 } 00:09:22.695 Got JSON-RPC error response 00:09:22.695 response: 00:09:22.695 { 00:09:22.695 "code": -17, 00:09:22.695 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:22.695 } 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.695 [2024-11-20 17:43:49.849252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:22.695 [2024-11-20 17:43:49.849357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.695 [2024-11-20 17:43:49.849382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:22.695 [2024-11-20 17:43:49.849395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.695 [2024-11-20 17:43:49.852034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.695 [2024-11-20 17:43:49.852079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:22.695 [2024-11-20 17:43:49.852202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:22.695 [2024-11-20 17:43:49.852270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
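The `rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001` call above recreates the pt1 passthru bdev over malloc1 with a fixed UUID. A hedged sketch of the JSON-RPC request it plausibly issues; the parameter names (`base_bdev_name`, `name`, `uuid`) are assumptions inferred from the flag names, not confirmed by this log:

```python
import json

# Assumed wire shape for the bdev_passthru_create RPC; the param keys are
# guesses mapped from the -b / -p / -u flags seen in the log.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_passthru_create",
    "params": {
        "base_bdev_name": "malloc1",
        "name": "pt1",
        "uuid": "00000000-0000-0000-0000-000000000001",
    },
}

# Round-trip through JSON, as the rpc_cmd wrapper would serialize it.
wire = json.dumps(request)
assert json.loads(wire)["method"] == "bdev_passthru_create"
```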
00:09:22.695 pt1 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.695 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.956 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.956 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.956 "name": "raid_bdev1", 00:09:22.957 "uuid": "035d0b49-74ff-4030-a3c6-72871f2f2fc6", 00:09:22.957 
"strip_size_kb": 64, 00:09:22.957 "state": "configuring", 00:09:22.957 "raid_level": "raid0", 00:09:22.957 "superblock": true, 00:09:22.957 "num_base_bdevs": 3, 00:09:22.957 "num_base_bdevs_discovered": 1, 00:09:22.957 "num_base_bdevs_operational": 3, 00:09:22.957 "base_bdevs_list": [ 00:09:22.957 { 00:09:22.957 "name": "pt1", 00:09:22.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.957 "is_configured": true, 00:09:22.957 "data_offset": 2048, 00:09:22.957 "data_size": 63488 00:09:22.957 }, 00:09:22.957 { 00:09:22.957 "name": null, 00:09:22.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.957 "is_configured": false, 00:09:22.957 "data_offset": 2048, 00:09:22.957 "data_size": 63488 00:09:22.957 }, 00:09:22.957 { 00:09:22.957 "name": null, 00:09:22.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.957 "is_configured": false, 00:09:22.957 "data_offset": 2048, 00:09:22.957 "data_size": 63488 00:09:22.957 } 00:09:22.957 ] 00:09:22.957 }' 00:09:22.957 17:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.957 17:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.220 [2024-11-20 17:43:50.288629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.220 [2024-11-20 17:43:50.288740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.220 [2024-11-20 17:43:50.288776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:23.220 [2024-11-20 17:43:50.288787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.220 [2024-11-20 17:43:50.289359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.220 [2024-11-20 17:43:50.289385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.220 [2024-11-20 17:43:50.289500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:23.220 [2024-11-20 17:43:50.289543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:23.220 pt2 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.220 [2024-11-20 17:43:50.300641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.220 17:43:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.220 "name": "raid_bdev1", 00:09:23.220 "uuid": "035d0b49-74ff-4030-a3c6-72871f2f2fc6", 00:09:23.220 "strip_size_kb": 64, 00:09:23.220 "state": "configuring", 00:09:23.220 "raid_level": "raid0", 00:09:23.220 "superblock": true, 00:09:23.220 "num_base_bdevs": 3, 00:09:23.220 "num_base_bdevs_discovered": 1, 00:09:23.220 "num_base_bdevs_operational": 3, 00:09:23.220 "base_bdevs_list": [ 00:09:23.220 { 00:09:23.220 "name": "pt1", 00:09:23.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.220 "is_configured": true, 00:09:23.220 "data_offset": 2048, 00:09:23.220 "data_size": 63488 00:09:23.220 }, 00:09:23.220 { 00:09:23.220 "name": null, 00:09:23.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.220 "is_configured": false, 00:09:23.220 "data_offset": 0, 00:09:23.220 "data_size": 63488 00:09:23.220 }, 00:09:23.220 { 00:09:23.220 "name": null, 00:09:23.220 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.220 
"is_configured": false, 00:09:23.220 "data_offset": 2048, 00:09:23.220 "data_size": 63488 00:09:23.220 } 00:09:23.220 ] 00:09:23.220 }' 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.220 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.788 [2024-11-20 17:43:50.751895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.788 [2024-11-20 17:43:50.752007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.788 [2024-11-20 17:43:50.752054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:23.788 [2024-11-20 17:43:50.752068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.788 [2024-11-20 17:43:50.752668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.788 [2024-11-20 17:43:50.752705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.788 [2024-11-20 17:43:50.752819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:23.788 [2024-11-20 17:43:50.752857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:23.788 pt2 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.788 [2024-11-20 17:43:50.763865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:23.788 [2024-11-20 17:43:50.763950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.788 [2024-11-20 17:43:50.763971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:23.788 [2024-11-20 17:43:50.763983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.788 [2024-11-20 17:43:50.764537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.788 [2024-11-20 17:43:50.764583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:23.788 [2024-11-20 17:43:50.764720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:23.788 [2024-11-20 17:43:50.764770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:23.788 [2024-11-20 17:43:50.764915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.788 [2024-11-20 17:43:50.764931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:23.788 [2024-11-20 17:43:50.765231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:23.788 [2024-11-20 17:43:50.765402] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.788 [2024-11-20 17:43:50.765416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:23.788 [2024-11-20 17:43:50.765584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.788 pt3 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.788 "name": "raid_bdev1", 00:09:23.788 "uuid": "035d0b49-74ff-4030-a3c6-72871f2f2fc6", 00:09:23.788 "strip_size_kb": 64, 00:09:23.788 "state": "online", 00:09:23.788 "raid_level": "raid0", 00:09:23.788 "superblock": true, 00:09:23.788 "num_base_bdevs": 3, 00:09:23.788 "num_base_bdevs_discovered": 3, 00:09:23.788 "num_base_bdevs_operational": 3, 00:09:23.788 "base_bdevs_list": [ 00:09:23.788 { 00:09:23.788 "name": "pt1", 00:09:23.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.788 "is_configured": true, 00:09:23.788 "data_offset": 2048, 00:09:23.788 "data_size": 63488 00:09:23.788 }, 00:09:23.788 { 00:09:23.788 "name": "pt2", 00:09:23.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.788 "is_configured": true, 00:09:23.788 "data_offset": 2048, 00:09:23.788 "data_size": 63488 00:09:23.788 }, 00:09:23.788 { 00:09:23.788 "name": "pt3", 00:09:23.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.788 "is_configured": true, 00:09:23.788 "data_offset": 2048, 00:09:23.788 "data_size": 63488 00:09:23.788 } 00:09:23.788 ] 00:09:23.788 }' 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.788 17:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:24.047 17:43:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.047 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.047 [2024-11-20 17:43:51.211494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.306 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.306 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.306 "name": "raid_bdev1", 00:09:24.306 "aliases": [ 00:09:24.306 "035d0b49-74ff-4030-a3c6-72871f2f2fc6" 00:09:24.306 ], 00:09:24.306 "product_name": "Raid Volume", 00:09:24.306 "block_size": 512, 00:09:24.306 "num_blocks": 190464, 00:09:24.306 "uuid": "035d0b49-74ff-4030-a3c6-72871f2f2fc6", 00:09:24.306 "assigned_rate_limits": { 00:09:24.306 "rw_ios_per_sec": 0, 00:09:24.306 "rw_mbytes_per_sec": 0, 00:09:24.306 "r_mbytes_per_sec": 0, 00:09:24.306 "w_mbytes_per_sec": 0 00:09:24.306 }, 00:09:24.306 "claimed": false, 00:09:24.306 "zoned": false, 00:09:24.306 "supported_io_types": { 00:09:24.306 "read": true, 00:09:24.306 "write": true, 00:09:24.306 "unmap": true, 00:09:24.306 "flush": true, 00:09:24.306 "reset": true, 00:09:24.306 "nvme_admin": false, 00:09:24.306 "nvme_io": false, 00:09:24.306 "nvme_io_md": false, 00:09:24.306 
"write_zeroes": true, 00:09:24.306 "zcopy": false, 00:09:24.306 "get_zone_info": false, 00:09:24.306 "zone_management": false, 00:09:24.306 "zone_append": false, 00:09:24.306 "compare": false, 00:09:24.306 "compare_and_write": false, 00:09:24.306 "abort": false, 00:09:24.306 "seek_hole": false, 00:09:24.306 "seek_data": false, 00:09:24.306 "copy": false, 00:09:24.306 "nvme_iov_md": false 00:09:24.306 }, 00:09:24.306 "memory_domains": [ 00:09:24.307 { 00:09:24.307 "dma_device_id": "system", 00:09:24.307 "dma_device_type": 1 00:09:24.307 }, 00:09:24.307 { 00:09:24.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.307 "dma_device_type": 2 00:09:24.307 }, 00:09:24.307 { 00:09:24.307 "dma_device_id": "system", 00:09:24.307 "dma_device_type": 1 00:09:24.307 }, 00:09:24.307 { 00:09:24.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.307 "dma_device_type": 2 00:09:24.307 }, 00:09:24.307 { 00:09:24.307 "dma_device_id": "system", 00:09:24.307 "dma_device_type": 1 00:09:24.307 }, 00:09:24.307 { 00:09:24.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.307 "dma_device_type": 2 00:09:24.307 } 00:09:24.307 ], 00:09:24.307 "driver_specific": { 00:09:24.307 "raid": { 00:09:24.307 "uuid": "035d0b49-74ff-4030-a3c6-72871f2f2fc6", 00:09:24.307 "strip_size_kb": 64, 00:09:24.307 "state": "online", 00:09:24.307 "raid_level": "raid0", 00:09:24.307 "superblock": true, 00:09:24.307 "num_base_bdevs": 3, 00:09:24.307 "num_base_bdevs_discovered": 3, 00:09:24.307 "num_base_bdevs_operational": 3, 00:09:24.307 "base_bdevs_list": [ 00:09:24.307 { 00:09:24.307 "name": "pt1", 00:09:24.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.307 "is_configured": true, 00:09:24.307 "data_offset": 2048, 00:09:24.307 "data_size": 63488 00:09:24.307 }, 00:09:24.307 { 00:09:24.307 "name": "pt2", 00:09:24.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.307 "is_configured": true, 00:09:24.307 "data_offset": 2048, 00:09:24.307 "data_size": 63488 00:09:24.307 }, 00:09:24.307 
{ 00:09:24.307 "name": "pt3", 00:09:24.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.307 "is_configured": true, 00:09:24.307 "data_offset": 2048, 00:09:24.307 "data_size": 63488 00:09:24.307 } 00:09:24.307 ] 00:09:24.307 } 00:09:24.307 } 00:09:24.307 }' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:24.307 pt2 00:09:24.307 pt3' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:24.307 [2024-11-20 
17:43:51.455055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.307 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 035d0b49-74ff-4030-a3c6-72871f2f2fc6 '!=' 035d0b49-74ff-4030-a3c6-72871f2f2fc6 ']' 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65449 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65449 ']' 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65449 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65449 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.566 killing process with pid 65449 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65449' 00:09:24.566 17:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65449 00:09:24.566 [2024-11-20 17:43:51.527046] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.567 17:43:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65449 00:09:24.567 [2024-11-20 17:43:51.527205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.567 [2024-11-20 17:43:51.527285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.567 [2024-11-20 17:43:51.527301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:24.826 [2024-11-20 17:43:51.871245] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.207 17:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:26.207 00:09:26.207 real 0m5.424s 00:09:26.207 user 0m7.470s 00:09:26.207 sys 0m0.979s 00:09:26.207 17:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.207 17:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.207 ************************************ 00:09:26.207 END TEST raid_superblock_test 00:09:26.207 ************************************ 00:09:26.207 17:43:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:26.207 17:43:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:26.207 17:43:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.207 17:43:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.207 ************************************ 00:09:26.207 START TEST raid_read_error_test 00:09:26.207 ************************************ 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:26.207 17:43:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rp7eQXKnFf 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65703 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65703 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65703 ']' 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.207 17:43:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.467 [2024-11-20 17:43:53.433812] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:09:26.467 [2024-11-20 17:43:53.433949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65703 ] 00:09:26.467 [2024-11-20 17:43:53.610652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.727 [2024-11-20 17:43:53.768713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.987 [2024-11-20 17:43:54.046864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.987 [2024-11-20 17:43:54.046951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.246 BaseBdev1_malloc 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.246 true 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.246 [2024-11-20 17:43:54.345431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:27.246 [2024-11-20 17:43:54.345506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.246 [2024-11-20 17:43:54.345532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:27.246 [2024-11-20 17:43:54.345546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.246 [2024-11-20 17:43:54.348397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.246 [2024-11-20 17:43:54.348440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:27.246 BaseBdev1 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.246 BaseBdev2_malloc 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.246 true 00:09:27.246 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.506 [2024-11-20 17:43:54.426949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:27.506 [2024-11-20 17:43:54.427037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.506 [2024-11-20 17:43:54.427060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:27.506 [2024-11-20 17:43:54.427075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.506 [2024-11-20 17:43:54.429883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.506 [2024-11-20 17:43:54.429927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:27.506 BaseBdev2 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.506 BaseBdev3_malloc 00:09:27.506 17:43:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.506 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.507 true 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.507 [2024-11-20 17:43:54.517682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:27.507 [2024-11-20 17:43:54.517752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.507 [2024-11-20 17:43:54.517784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:27.507 [2024-11-20 17:43:54.517798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.507 [2024-11-20 17:43:54.520613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.507 [2024-11-20 17:43:54.520672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:27.507 BaseBdev3 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.507 [2024-11-20 17:43:54.525771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.507 [2024-11-20 17:43:54.528246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.507 [2024-11-20 17:43:54.528341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.507 [2024-11-20 17:43:54.528571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:27.507 [2024-11-20 17:43:54.528621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:27.507 [2024-11-20 17:43:54.528919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:27.507 [2024-11-20 17:43:54.529144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:27.507 [2024-11-20 17:43:54.529167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:27.507 [2024-11-20 17:43:54.529337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.507 17:43:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.507 "name": "raid_bdev1", 00:09:27.507 "uuid": "683210fa-9fb3-46fe-b5d8-aad5296be65d", 00:09:27.507 "strip_size_kb": 64, 00:09:27.507 "state": "online", 00:09:27.507 "raid_level": "raid0", 00:09:27.507 "superblock": true, 00:09:27.507 "num_base_bdevs": 3, 00:09:27.507 "num_base_bdevs_discovered": 3, 00:09:27.507 "num_base_bdevs_operational": 3, 00:09:27.507 "base_bdevs_list": [ 00:09:27.507 { 00:09:27.507 "name": "BaseBdev1", 00:09:27.507 "uuid": "9573f637-06c6-553c-afff-339bdb955f54", 00:09:27.507 "is_configured": true, 00:09:27.507 "data_offset": 2048, 00:09:27.507 "data_size": 63488 00:09:27.507 }, 00:09:27.507 { 00:09:27.507 "name": "BaseBdev2", 00:09:27.507 "uuid": "1e2b8432-30f4-5e0f-98d6-6059d8b5fa87", 00:09:27.507 "is_configured": true, 00:09:27.507 "data_offset": 2048, 00:09:27.507 "data_size": 63488 
00:09:27.507 }, 00:09:27.507 { 00:09:27.507 "name": "BaseBdev3", 00:09:27.507 "uuid": "e0a43e11-2f92-53b2-add3-e4380965fa11", 00:09:27.507 "is_configured": true, 00:09:27.507 "data_offset": 2048, 00:09:27.507 "data_size": 63488 00:09:27.507 } 00:09:27.507 ] 00:09:27.507 }' 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.507 17:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.076 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:28.076 17:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.076 [2024-11-20 17:43:55.070716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.015 17:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.016 17:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.016 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.016 17:43:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.016 "name": "raid_bdev1", 00:09:29.016 "uuid": "683210fa-9fb3-46fe-b5d8-aad5296be65d", 00:09:29.016 "strip_size_kb": 64, 00:09:29.016 "state": "online", 00:09:29.016 "raid_level": "raid0", 00:09:29.016 "superblock": true, 00:09:29.016 "num_base_bdevs": 3, 00:09:29.016 "num_base_bdevs_discovered": 3, 00:09:29.016 "num_base_bdevs_operational": 3, 00:09:29.016 "base_bdevs_list": [ 00:09:29.016 { 00:09:29.016 "name": "BaseBdev1", 00:09:29.016 "uuid": "9573f637-06c6-553c-afff-339bdb955f54", 00:09:29.016 "is_configured": true, 00:09:29.016 "data_offset": 2048, 00:09:29.016 "data_size": 63488 
00:09:29.016 }, 00:09:29.016 { 00:09:29.016 "name": "BaseBdev2", 00:09:29.016 "uuid": "1e2b8432-30f4-5e0f-98d6-6059d8b5fa87", 00:09:29.016 "is_configured": true, 00:09:29.016 "data_offset": 2048, 00:09:29.016 "data_size": 63488 00:09:29.016 }, 00:09:29.016 { 00:09:29.016 "name": "BaseBdev3", 00:09:29.016 "uuid": "e0a43e11-2f92-53b2-add3-e4380965fa11", 00:09:29.016 "is_configured": true, 00:09:29.016 "data_offset": 2048, 00:09:29.016 "data_size": 63488 00:09:29.016 } 00:09:29.016 ] 00:09:29.016 }' 00:09:29.016 17:43:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.016 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.275 17:43:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.275 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.275 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.537 [2024-11-20 17:43:56.453765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.537 [2024-11-20 17:43:56.453823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.537 [2024-11-20 17:43:56.457113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.537 [2024-11-20 17:43:56.457175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.537 [2024-11-20 17:43:56.457228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.537 [2024-11-20 17:43:56.457241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:29.537 { 00:09:29.537 "results": [ 00:09:29.537 { 00:09:29.537 "job": "raid_bdev1", 00:09:29.537 "core_mask": "0x1", 00:09:29.537 "workload": "randrw", 00:09:29.537 "percentage": 50, 
00:09:29.537 "status": "finished", 00:09:29.537 "queue_depth": 1, 00:09:29.537 "io_size": 131072, 00:09:29.537 "runtime": 1.383257, 00:09:29.537 "iops": 11604.495766151915, 00:09:29.537 "mibps": 1450.5619707689893, 00:09:29.537 "io_failed": 1, 00:09:29.537 "io_timeout": 0, 00:09:29.537 "avg_latency_us": 120.70276423321546, 00:09:29.537 "min_latency_us": 24.817467248908297, 00:09:29.537 "max_latency_us": 1824.419213973799 00:09:29.537 } 00:09:29.537 ], 00:09:29.537 "core_count": 1 00:09:29.537 } 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65703 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65703 ']' 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65703 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65703 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.537 killing process with pid 65703 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65703' 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65703 00:09:29.537 [2024-11-20 17:43:56.498685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.537 17:43:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65703 00:09:29.797 [2024-11-20 
17:43:56.806816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rp7eQXKnFf 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:31.177 00:09:31.177 real 0m4.877s 00:09:31.177 user 0m5.655s 00:09:31.177 sys 0m0.650s 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.177 17:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.177 ************************************ 00:09:31.177 END TEST raid_read_error_test 00:09:31.177 ************************************ 00:09:31.177 17:43:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:31.177 17:43:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.177 17:43:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.177 17:43:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.177 ************************************ 00:09:31.177 START TEST raid_write_error_test 00:09:31.177 ************************************ 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:31.177 17:43:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:31.177 17:43:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rphlAjB9IL 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65849 00:09:31.177 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65849 00:09:31.178 17:43:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:31.178 17:43:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65849 ']' 00:09:31.178 17:43:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.178 17:43:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.178 17:43:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:31.178 17:43:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.178 17:43:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.437 [2024-11-20 17:43:58.364331] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:09:31.437 [2024-11-20 17:43:58.364453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65849 ] 00:09:31.437 [2024-11-20 17:43:58.542556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.697 [2024-11-20 17:43:58.686386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.957 [2024-11-20 17:43:58.929867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.957 [2024-11-20 17:43:58.929958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 BaseBdev1_malloc 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 true 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 [2024-11-20 17:43:59.287597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:32.217 [2024-11-20 17:43:59.287671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.217 [2024-11-20 17:43:59.287701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:32.217 [2024-11-20 17:43:59.287717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.217 [2024-11-20 17:43:59.290469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.217 [2024-11-20 17:43:59.290510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:32.217 BaseBdev1 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.217 BaseBdev2_malloc 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 true 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 [2024-11-20 17:43:59.369313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:32.217 [2024-11-20 17:43:59.369389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.217 [2024-11-20 17:43:59.369409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:32.217 [2024-11-20 17:43:59.369424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.217 [2024-11-20 17:43:59.372249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.217 [2024-11-20 17:43:59.372292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:32.217 BaseBdev2 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:32.217 17:43:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.217 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.477 BaseBdev3_malloc 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.477 true 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.477 [2024-11-20 17:43:59.462311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:32.477 [2024-11-20 17:43:59.462378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.477 [2024-11-20 17:43:59.462398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:32.477 [2024-11-20 17:43:59.462412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.477 [2024-11-20 17:43:59.465075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.477 [2024-11-20 17:43:59.465119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:32.477 BaseBdev3 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.477 [2024-11-20 17:43:59.474393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.477 [2024-11-20 17:43:59.476721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.477 [2024-11-20 17:43:59.476813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.477 [2024-11-20 17:43:59.477077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:32.477 [2024-11-20 17:43:59.477102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:32.477 [2024-11-20 17:43:59.477401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:32.477 [2024-11-20 17:43:59.477629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:32.477 [2024-11-20 17:43:59.477653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:32.477 [2024-11-20 17:43:59.477844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.477 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.478 "name": "raid_bdev1", 00:09:32.478 "uuid": "af3e0a8e-72da-4106-9327-2ccd99a58ae9", 00:09:32.478 "strip_size_kb": 64, 00:09:32.478 "state": "online", 00:09:32.478 "raid_level": "raid0", 00:09:32.478 "superblock": true, 00:09:32.478 "num_base_bdevs": 3, 00:09:32.478 "num_base_bdevs_discovered": 3, 00:09:32.478 "num_base_bdevs_operational": 3, 00:09:32.478 "base_bdevs_list": [ 00:09:32.478 { 00:09:32.478 "name": "BaseBdev1", 
00:09:32.478 "uuid": "8308a56c-da75-51cd-99f8-40bed83ef1d3", 00:09:32.478 "is_configured": true, 00:09:32.478 "data_offset": 2048, 00:09:32.478 "data_size": 63488 00:09:32.478 }, 00:09:32.478 { 00:09:32.478 "name": "BaseBdev2", 00:09:32.478 "uuid": "ec012c2a-fce1-53a6-aa83-59f5068174aa", 00:09:32.478 "is_configured": true, 00:09:32.478 "data_offset": 2048, 00:09:32.478 "data_size": 63488 00:09:32.478 }, 00:09:32.478 { 00:09:32.478 "name": "BaseBdev3", 00:09:32.478 "uuid": "e0f285e3-83e7-5acb-bdf8-0fc03e72e24e", 00:09:32.478 "is_configured": true, 00:09:32.478 "data_offset": 2048, 00:09:32.478 "data_size": 63488 00:09:32.478 } 00:09:32.478 ] 00:09:32.478 }' 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.478 17:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.781 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:32.781 17:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:33.040 [2024-11-20 17:44:00.023198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.976 "name": "raid_bdev1", 00:09:33.976 "uuid": "af3e0a8e-72da-4106-9327-2ccd99a58ae9", 00:09:33.976 "strip_size_kb": 64, 00:09:33.976 "state": "online", 00:09:33.976 
"raid_level": "raid0", 00:09:33.976 "superblock": true, 00:09:33.976 "num_base_bdevs": 3, 00:09:33.976 "num_base_bdevs_discovered": 3, 00:09:33.976 "num_base_bdevs_operational": 3, 00:09:33.976 "base_bdevs_list": [ 00:09:33.976 { 00:09:33.976 "name": "BaseBdev1", 00:09:33.976 "uuid": "8308a56c-da75-51cd-99f8-40bed83ef1d3", 00:09:33.976 "is_configured": true, 00:09:33.976 "data_offset": 2048, 00:09:33.976 "data_size": 63488 00:09:33.976 }, 00:09:33.976 { 00:09:33.976 "name": "BaseBdev2", 00:09:33.976 "uuid": "ec012c2a-fce1-53a6-aa83-59f5068174aa", 00:09:33.976 "is_configured": true, 00:09:33.976 "data_offset": 2048, 00:09:33.976 "data_size": 63488 00:09:33.976 }, 00:09:33.976 { 00:09:33.976 "name": "BaseBdev3", 00:09:33.976 "uuid": "e0f285e3-83e7-5acb-bdf8-0fc03e72e24e", 00:09:33.976 "is_configured": true, 00:09:33.976 "data_offset": 2048, 00:09:33.976 "data_size": 63488 00:09:33.976 } 00:09:33.976 ] 00:09:33.976 }' 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.976 17:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.236 [2024-11-20 17:44:01.365377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.236 [2024-11-20 17:44:01.365443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.236 [2024-11-20 17:44:01.368737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.236 [2024-11-20 17:44:01.368798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.236 [2024-11-20 17:44:01.368851] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.236 [2024-11-20 17:44:01.368865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:34.236 { 00:09:34.236 "results": [ 00:09:34.236 { 00:09:34.236 "job": "raid_bdev1", 00:09:34.236 "core_mask": "0x1", 00:09:34.236 "workload": "randrw", 00:09:34.236 "percentage": 50, 00:09:34.236 "status": "finished", 00:09:34.236 "queue_depth": 1, 00:09:34.236 "io_size": 131072, 00:09:34.236 "runtime": 1.342342, 00:09:34.236 "iops": 11481.425746940795, 00:09:34.236 "mibps": 1435.1782183675994, 00:09:34.236 "io_failed": 1, 00:09:34.236 "io_timeout": 0, 00:09:34.236 "avg_latency_us": 121.97179752701243, 00:09:34.236 "min_latency_us": 29.512663755458515, 00:09:34.236 "max_latency_us": 1674.172925764192 00:09:34.236 } 00:09:34.236 ], 00:09:34.236 "core_count": 1 00:09:34.236 } 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65849 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65849 ']' 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65849 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65849 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.236 killing process with pid 65849 00:09:34.236 
17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65849' 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65849 00:09:34.236 [2024-11-20 17:44:01.399978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.236 17:44:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65849 00:09:34.802 [2024-11-20 17:44:01.707329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rphlAjB9IL 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:36.217 00:09:36.217 real 0m5.034s 00:09:36.217 user 0m5.764s 00:09:36.217 sys 0m0.654s 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.217 17:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.217 ************************************ 00:09:36.217 END TEST raid_write_error_test 00:09:36.217 ************************************ 00:09:36.217 17:44:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:36.217 17:44:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:36.217 17:44:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:36.217 17:44:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.217 17:44:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.217 ************************************ 00:09:36.217 START TEST raid_state_function_test 00:09:36.217 ************************************ 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:36.217 17:44:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65998 00:09:36.217 Process raid pid: 65998 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65998' 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65998 00:09:36.217 17:44:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65998 ']' 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.217 17:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.218 17:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.218 17:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.476 [2024-11-20 17:44:03.477230] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:09:36.476 [2024-11-20 17:44:03.477391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.734 [2024-11-20 17:44:03.666881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.734 [2024-11-20 17:44:03.828424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.993 [2024-11-20 17:44:04.083433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.993 [2024-11-20 17:44:04.083489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.252 [2024-11-20 17:44:04.299542] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.252 [2024-11-20 17:44:04.299617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.252 [2024-11-20 17:44:04.299629] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.252 [2024-11-20 17:44:04.299640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.252 [2024-11-20 17:44:04.299646] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.252 [2024-11-20 17:44:04.299657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.252 "name": "Existed_Raid", 00:09:37.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.252 "strip_size_kb": 64, 00:09:37.252 "state": "configuring", 00:09:37.252 "raid_level": "concat", 00:09:37.252 "superblock": false, 00:09:37.252 "num_base_bdevs": 3, 00:09:37.252 "num_base_bdevs_discovered": 0, 00:09:37.252 "num_base_bdevs_operational": 3, 00:09:37.252 "base_bdevs_list": [ 00:09:37.252 { 00:09:37.252 "name": "BaseBdev1", 00:09:37.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.252 "is_configured": false, 00:09:37.252 "data_offset": 0, 00:09:37.252 "data_size": 0 00:09:37.252 }, 00:09:37.252 { 00:09:37.252 "name": "BaseBdev2", 00:09:37.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.252 "is_configured": false, 00:09:37.252 "data_offset": 0, 00:09:37.252 "data_size": 0 00:09:37.252 }, 00:09:37.252 { 00:09:37.252 "name": "BaseBdev3", 00:09:37.252 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:37.252 "is_configured": false, 00:09:37.252 "data_offset": 0, 00:09:37.252 "data_size": 0 00:09:37.252 } 00:09:37.252 ] 00:09:37.252 }' 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.252 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.819 [2024-11-20 17:44:04.758773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.819 [2024-11-20 17:44:04.758836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.819 [2024-11-20 17:44:04.766718] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.819 [2024-11-20 17:44:04.766793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.819 [2024-11-20 17:44:04.766804] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.819 [2024-11-20 17:44:04.766816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:37.819 [2024-11-20 17:44:04.766824] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.819 [2024-11-20 17:44:04.766835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.819 [2024-11-20 17:44:04.816962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.819 BaseBdev1 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.819 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.819 [ 00:09:37.819 { 00:09:37.819 "name": "BaseBdev1", 00:09:37.819 "aliases": [ 00:09:37.820 "7bb47077-380d-4b73-a3cd-6c9003679cbd" 00:09:37.820 ], 00:09:37.820 "product_name": "Malloc disk", 00:09:37.820 "block_size": 512, 00:09:37.820 "num_blocks": 65536, 00:09:37.820 "uuid": "7bb47077-380d-4b73-a3cd-6c9003679cbd", 00:09:37.820 "assigned_rate_limits": { 00:09:37.820 "rw_ios_per_sec": 0, 00:09:37.820 "rw_mbytes_per_sec": 0, 00:09:37.820 "r_mbytes_per_sec": 0, 00:09:37.820 "w_mbytes_per_sec": 0 00:09:37.820 }, 00:09:37.820 "claimed": true, 00:09:37.820 "claim_type": "exclusive_write", 00:09:37.820 "zoned": false, 00:09:37.820 "supported_io_types": { 00:09:37.820 "read": true, 00:09:37.820 "write": true, 00:09:37.820 "unmap": true, 00:09:37.820 "flush": true, 00:09:37.820 "reset": true, 00:09:37.820 "nvme_admin": false, 00:09:37.820 "nvme_io": false, 00:09:37.820 "nvme_io_md": false, 00:09:37.820 "write_zeroes": true, 00:09:37.820 "zcopy": true, 00:09:37.820 "get_zone_info": false, 00:09:37.820 "zone_management": false, 00:09:37.820 "zone_append": false, 00:09:37.820 "compare": false, 00:09:37.820 "compare_and_write": false, 00:09:37.820 "abort": true, 00:09:37.820 "seek_hole": false, 00:09:37.820 "seek_data": false, 00:09:37.820 "copy": true, 00:09:37.820 "nvme_iov_md": false 00:09:37.820 }, 00:09:37.820 "memory_domains": [ 00:09:37.820 { 00:09:37.820 "dma_device_id": "system", 00:09:37.820 "dma_device_type": 1 00:09:37.820 }, 00:09:37.820 { 00:09:37.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:37.820 "dma_device_type": 2 00:09:37.820 } 00:09:37.820 ], 00:09:37.820 "driver_specific": {} 00:09:37.820 } 00:09:37.820 ] 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.820 17:44:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.820 "name": "Existed_Raid", 00:09:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.820 "strip_size_kb": 64, 00:09:37.820 "state": "configuring", 00:09:37.820 "raid_level": "concat", 00:09:37.820 "superblock": false, 00:09:37.820 "num_base_bdevs": 3, 00:09:37.820 "num_base_bdevs_discovered": 1, 00:09:37.820 "num_base_bdevs_operational": 3, 00:09:37.820 "base_bdevs_list": [ 00:09:37.820 { 00:09:37.820 "name": "BaseBdev1", 00:09:37.820 "uuid": "7bb47077-380d-4b73-a3cd-6c9003679cbd", 00:09:37.820 "is_configured": true, 00:09:37.820 "data_offset": 0, 00:09:37.820 "data_size": 65536 00:09:37.820 }, 00:09:37.820 { 00:09:37.820 "name": "BaseBdev2", 00:09:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.820 "is_configured": false, 00:09:37.820 "data_offset": 0, 00:09:37.820 "data_size": 0 00:09:37.820 }, 00:09:37.820 { 00:09:37.820 "name": "BaseBdev3", 00:09:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.820 "is_configured": false, 00:09:37.820 "data_offset": 0, 00:09:37.820 "data_size": 0 00:09:37.820 } 00:09:37.820 ] 00:09:37.820 }' 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.820 17:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.388 [2024-11-20 17:44:05.324203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.388 [2024-11-20 17:44:05.324293] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.388 [2024-11-20 17:44:05.332212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.388 [2024-11-20 17:44:05.334485] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.388 [2024-11-20 17:44:05.334530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.388 [2024-11-20 17:44:05.334542] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.388 [2024-11-20 17:44:05.334551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.388 17:44:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.388 "name": "Existed_Raid", 00:09:38.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.388 "strip_size_kb": 64, 00:09:38.388 "state": "configuring", 00:09:38.388 "raid_level": "concat", 00:09:38.388 "superblock": false, 00:09:38.388 "num_base_bdevs": 3, 00:09:38.388 "num_base_bdevs_discovered": 1, 00:09:38.388 "num_base_bdevs_operational": 3, 00:09:38.388 "base_bdevs_list": [ 00:09:38.388 { 00:09:38.388 "name": "BaseBdev1", 00:09:38.388 "uuid": "7bb47077-380d-4b73-a3cd-6c9003679cbd", 00:09:38.388 "is_configured": true, 00:09:38.388 "data_offset": 
0, 00:09:38.388 "data_size": 65536 00:09:38.388 }, 00:09:38.388 { 00:09:38.388 "name": "BaseBdev2", 00:09:38.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.388 "is_configured": false, 00:09:38.388 "data_offset": 0, 00:09:38.388 "data_size": 0 00:09:38.388 }, 00:09:38.388 { 00:09:38.388 "name": "BaseBdev3", 00:09:38.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.388 "is_configured": false, 00:09:38.388 "data_offset": 0, 00:09:38.388 "data_size": 0 00:09:38.388 } 00:09:38.388 ] 00:09:38.388 }' 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.388 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.649 [2024-11-20 17:44:05.789326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.649 BaseBdev2 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
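The trace above keeps repeating one pattern: `bdev_raid_create` is issued while some base bdevs are still missing, the raid stays in `state: configuring`, and each `bdev_malloc_create` raises `num_base_bdevs_discovered` by one. A condensed standalone sketch of that sequence, to be run against an already-started `bdev_svc`, might look like the following. The `SPDK_RPC` path and the guard logic are assumptions for illustration; the RPC commands and their arguments are taken verbatim from the trace.

```shell
#!/usr/bin/env bash
# Sketch of the raid_state_function_test RPC sequence (assumes a running
# bdev_svc listening on /var/tmp/spdk.sock and an SPDK checkout with rpc.py).
RPC="${SPDK_RPC:-scripts/rpc.py}"   # hypothetical path; adjust to your checkout
SOCK="/var/tmp/spdk.sock"

run_sequence() {
    # Creating the raid before any base bdev exists is allowed: the raid bdev
    # is registered in "configuring" state with num_base_bdevs_discovered=0.
    "$RPC" bdev_raid_create -z 64 -r concat -b "BaseBdev1 BaseBdev2 BaseBdev3" -n Existed_Raid
    "$RPC" bdev_raid_get_bdevs all

    # Register the base bdevs one at a time; each one is claimed by the raid
    # and discovered count grows, until the third flips the state to "online".
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        "$RPC" bdev_malloc_create 32 512 -b "$b"
        "$RPC" bdev_raid_get_bdevs all
    done
}

if [[ -S "$SOCK" && -e "$RPC" ]]; then
    run_sequence
else
    echo "skipping: no bdev_svc at $SOCK; sequence shown only"
fi
```

Note the test itself also deletes and recreates `Existed_Raid` between steps (`bdev_raid_delete`), which is why the trace shows the "doesn't exist now" debug lines more than once.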
00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.649 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.649 [ 00:09:38.649 { 00:09:38.649 "name": "BaseBdev2", 00:09:38.649 "aliases": [ 00:09:38.649 "f3347d16-afe7-42f2-adf3-5cc38334a15d" 00:09:38.649 ], 00:09:38.649 "product_name": "Malloc disk", 00:09:38.649 "block_size": 512, 00:09:38.649 "num_blocks": 65536, 00:09:38.649 "uuid": "f3347d16-afe7-42f2-adf3-5cc38334a15d", 00:09:38.649 "assigned_rate_limits": { 00:09:38.649 "rw_ios_per_sec": 0, 00:09:38.649 "rw_mbytes_per_sec": 0, 00:09:38.649 "r_mbytes_per_sec": 0, 00:09:38.649 "w_mbytes_per_sec": 0 00:09:38.649 }, 00:09:38.649 "claimed": true, 00:09:38.649 "claim_type": "exclusive_write", 00:09:38.649 "zoned": false, 00:09:38.649 "supported_io_types": { 00:09:38.649 "read": true, 00:09:38.649 "write": true, 00:09:38.649 "unmap": true, 00:09:38.649 "flush": true, 00:09:38.649 "reset": true, 00:09:38.649 "nvme_admin": false, 00:09:38.649 "nvme_io": false, 00:09:38.649 "nvme_io_md": false, 00:09:38.649 "write_zeroes": true, 00:09:38.649 "zcopy": true, 00:09:38.649 "get_zone_info": false, 00:09:38.649 "zone_management": false, 00:09:38.649 "zone_append": false, 00:09:38.649 "compare": false, 00:09:38.649 "compare_and_write": false, 00:09:38.649 "abort": true, 00:09:38.649 "seek_hole": 
false, 00:09:38.649 "seek_data": false, 00:09:38.649 "copy": true, 00:09:38.649 "nvme_iov_md": false 00:09:38.649 }, 00:09:38.649 "memory_domains": [ 00:09:38.649 { 00:09:38.649 "dma_device_id": "system", 00:09:38.649 "dma_device_type": 1 00:09:38.649 }, 00:09:38.649 { 00:09:38.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.909 "dma_device_type": 2 00:09:38.909 } 00:09:38.909 ], 00:09:38.909 "driver_specific": {} 00:09:38.909 } 00:09:38.909 ] 00:09:38.909 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.909 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.910 "name": "Existed_Raid", 00:09:38.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.910 "strip_size_kb": 64, 00:09:38.910 "state": "configuring", 00:09:38.910 "raid_level": "concat", 00:09:38.910 "superblock": false, 00:09:38.910 "num_base_bdevs": 3, 00:09:38.910 "num_base_bdevs_discovered": 2, 00:09:38.910 "num_base_bdevs_operational": 3, 00:09:38.910 "base_bdevs_list": [ 00:09:38.910 { 00:09:38.910 "name": "BaseBdev1", 00:09:38.910 "uuid": "7bb47077-380d-4b73-a3cd-6c9003679cbd", 00:09:38.910 "is_configured": true, 00:09:38.910 "data_offset": 0, 00:09:38.910 "data_size": 65536 00:09:38.910 }, 00:09:38.910 { 00:09:38.910 "name": "BaseBdev2", 00:09:38.910 "uuid": "f3347d16-afe7-42f2-adf3-5cc38334a15d", 00:09:38.910 "is_configured": true, 00:09:38.910 "data_offset": 0, 00:09:38.910 "data_size": 65536 00:09:38.910 }, 00:09:38.910 { 00:09:38.910 "name": "BaseBdev3", 00:09:38.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.910 "is_configured": false, 00:09:38.910 "data_offset": 0, 00:09:38.910 "data_size": 0 00:09:38.910 } 00:09:38.910 ] 00:09:38.910 }' 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.910 17:44:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.169 [2024-11-20 17:44:06.319079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.169 [2024-11-20 17:44:06.319144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:39.169 [2024-11-20 17:44:06.319159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:39.169 [2024-11-20 17:44:06.319479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:39.169 [2024-11-20 17:44:06.319697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:39.169 [2024-11-20 17:44:06.319716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:39.169 [2024-11-20 17:44:06.320067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.169 BaseBdev3 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.169 17:44:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.169 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.429 [ 00:09:39.429 { 00:09:39.429 "name": "BaseBdev3", 00:09:39.429 "aliases": [ 00:09:39.429 "03c283ca-5373-4f5e-babe-e02a02aef55e" 00:09:39.429 ], 00:09:39.429 "product_name": "Malloc disk", 00:09:39.429 "block_size": 512, 00:09:39.429 "num_blocks": 65536, 00:09:39.429 "uuid": "03c283ca-5373-4f5e-babe-e02a02aef55e", 00:09:39.429 "assigned_rate_limits": { 00:09:39.429 "rw_ios_per_sec": 0, 00:09:39.429 "rw_mbytes_per_sec": 0, 00:09:39.429 "r_mbytes_per_sec": 0, 00:09:39.429 "w_mbytes_per_sec": 0 00:09:39.429 }, 00:09:39.429 "claimed": true, 00:09:39.429 "claim_type": "exclusive_write", 00:09:39.429 "zoned": false, 00:09:39.429 "supported_io_types": { 00:09:39.429 "read": true, 00:09:39.429 "write": true, 00:09:39.429 "unmap": true, 00:09:39.429 "flush": true, 00:09:39.429 "reset": true, 00:09:39.429 "nvme_admin": false, 00:09:39.429 "nvme_io": false, 00:09:39.429 "nvme_io_md": false, 00:09:39.429 "write_zeroes": true, 00:09:39.429 "zcopy": true, 00:09:39.429 "get_zone_info": false, 00:09:39.429 "zone_management": false, 00:09:39.429 "zone_append": false, 00:09:39.429 "compare": false, 
00:09:39.429 "compare_and_write": false, 00:09:39.429 "abort": true, 00:09:39.429 "seek_hole": false, 00:09:39.429 "seek_data": false, 00:09:39.429 "copy": true, 00:09:39.429 "nvme_iov_md": false 00:09:39.429 }, 00:09:39.429 "memory_domains": [ 00:09:39.430 { 00:09:39.430 "dma_device_id": "system", 00:09:39.430 "dma_device_type": 1 00:09:39.430 }, 00:09:39.430 { 00:09:39.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.430 "dma_device_type": 2 00:09:39.430 } 00:09:39.430 ], 00:09:39.430 "driver_specific": {} 00:09:39.430 } 00:09:39.430 ] 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.430 "name": "Existed_Raid", 00:09:39.430 "uuid": "fdbeb881-7c0c-4e8b-b28f-f1b4e84e84ea", 00:09:39.430 "strip_size_kb": 64, 00:09:39.430 "state": "online", 00:09:39.430 "raid_level": "concat", 00:09:39.430 "superblock": false, 00:09:39.430 "num_base_bdevs": 3, 00:09:39.430 "num_base_bdevs_discovered": 3, 00:09:39.430 "num_base_bdevs_operational": 3, 00:09:39.430 "base_bdevs_list": [ 00:09:39.430 { 00:09:39.430 "name": "BaseBdev1", 00:09:39.430 "uuid": "7bb47077-380d-4b73-a3cd-6c9003679cbd", 00:09:39.430 "is_configured": true, 00:09:39.430 "data_offset": 0, 00:09:39.430 "data_size": 65536 00:09:39.430 }, 00:09:39.430 { 00:09:39.430 "name": "BaseBdev2", 00:09:39.430 "uuid": "f3347d16-afe7-42f2-adf3-5cc38334a15d", 00:09:39.430 "is_configured": true, 00:09:39.430 "data_offset": 0, 00:09:39.430 "data_size": 65536 00:09:39.430 }, 00:09:39.430 { 00:09:39.430 "name": "BaseBdev3", 00:09:39.430 "uuid": "03c283ca-5373-4f5e-babe-e02a02aef55e", 00:09:39.430 "is_configured": true, 00:09:39.430 "data_offset": 0, 00:09:39.430 "data_size": 65536 00:09:39.430 } 00:09:39.430 ] 00:09:39.430 }' 00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
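The state checks in the trace all funnel through the same `jq` filter: `bdev_raid_get_bdevs all` returns an array, and `verify_raid_bdev_state` selects the `Existed_Raid` record before comparing fields. A minimal self-contained sketch of that filter, run here against a trimmed sample of the JSON actually captured above (so it works without a live SPDK process, and falls back if `jq` is absent):

```shell
#!/usr/bin/env bash
# Trimmed sample of the bdev_raid_get_bdevs output shown in the trace.
json='[{"name":"Existed_Raid","state":"online","raid_level":"concat","strip_size_kb":64,"num_base_bdevs":3,"num_base_bdevs_discovered":3}]'

if command -v jq >/dev/null 2>&1; then
    # Same filter the test script uses to isolate the raid bdev of interest.
    state=$(jq -r '.[] | select(.name == "Existed_Raid") | .state' <<<"$json")
else
    state=online   # fallback so the sketch still runs where jq is missing
fi
echo "Existed_Raid state: $state"
```

Running this prints `Existed_Raid state: online`, mirroring the final comparison the test performs once all three base bdevs have been discovered.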
00:09:39.430 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.690 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.690 [2024-11-20 17:44:06.846611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.976 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.976 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.976 "name": "Existed_Raid", 00:09:39.976 "aliases": [ 00:09:39.976 "fdbeb881-7c0c-4e8b-b28f-f1b4e84e84ea" 00:09:39.976 ], 00:09:39.976 "product_name": "Raid Volume", 00:09:39.976 "block_size": 512, 00:09:39.976 "num_blocks": 196608, 00:09:39.976 "uuid": "fdbeb881-7c0c-4e8b-b28f-f1b4e84e84ea", 00:09:39.976 "assigned_rate_limits": { 00:09:39.976 "rw_ios_per_sec": 0, 00:09:39.976 "rw_mbytes_per_sec": 0, 00:09:39.976 "r_mbytes_per_sec": 
0, 00:09:39.976 "w_mbytes_per_sec": 0 00:09:39.976 }, 00:09:39.976 "claimed": false, 00:09:39.976 "zoned": false, 00:09:39.976 "supported_io_types": { 00:09:39.976 "read": true, 00:09:39.976 "write": true, 00:09:39.976 "unmap": true, 00:09:39.976 "flush": true, 00:09:39.976 "reset": true, 00:09:39.976 "nvme_admin": false, 00:09:39.976 "nvme_io": false, 00:09:39.976 "nvme_io_md": false, 00:09:39.976 "write_zeroes": true, 00:09:39.976 "zcopy": false, 00:09:39.976 "get_zone_info": false, 00:09:39.976 "zone_management": false, 00:09:39.976 "zone_append": false, 00:09:39.976 "compare": false, 00:09:39.976 "compare_and_write": false, 00:09:39.976 "abort": false, 00:09:39.976 "seek_hole": false, 00:09:39.976 "seek_data": false, 00:09:39.976 "copy": false, 00:09:39.976 "nvme_iov_md": false 00:09:39.976 }, 00:09:39.976 "memory_domains": [ 00:09:39.976 { 00:09:39.976 "dma_device_id": "system", 00:09:39.976 "dma_device_type": 1 00:09:39.976 }, 00:09:39.976 { 00:09:39.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.976 "dma_device_type": 2 00:09:39.976 }, 00:09:39.976 { 00:09:39.976 "dma_device_id": "system", 00:09:39.976 "dma_device_type": 1 00:09:39.976 }, 00:09:39.976 { 00:09:39.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.976 "dma_device_type": 2 00:09:39.976 }, 00:09:39.976 { 00:09:39.976 "dma_device_id": "system", 00:09:39.976 "dma_device_type": 1 00:09:39.976 }, 00:09:39.976 { 00:09:39.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.976 "dma_device_type": 2 00:09:39.976 } 00:09:39.976 ], 00:09:39.976 "driver_specific": { 00:09:39.976 "raid": { 00:09:39.976 "uuid": "fdbeb881-7c0c-4e8b-b28f-f1b4e84e84ea", 00:09:39.976 "strip_size_kb": 64, 00:09:39.976 "state": "online", 00:09:39.976 "raid_level": "concat", 00:09:39.976 "superblock": false, 00:09:39.976 "num_base_bdevs": 3, 00:09:39.976 "num_base_bdevs_discovered": 3, 00:09:39.976 "num_base_bdevs_operational": 3, 00:09:39.976 "base_bdevs_list": [ 00:09:39.976 { 00:09:39.976 "name": "BaseBdev1", 
00:09:39.976 "uuid": "7bb47077-380d-4b73-a3cd-6c9003679cbd", 00:09:39.976 "is_configured": true, 00:09:39.976 "data_offset": 0, 00:09:39.976 "data_size": 65536 00:09:39.976 }, 00:09:39.976 { 00:09:39.976 "name": "BaseBdev2", 00:09:39.976 "uuid": "f3347d16-afe7-42f2-adf3-5cc38334a15d", 00:09:39.976 "is_configured": true, 00:09:39.976 "data_offset": 0, 00:09:39.976 "data_size": 65536 00:09:39.976 }, 00:09:39.976 { 00:09:39.976 "name": "BaseBdev3", 00:09:39.976 "uuid": "03c283ca-5373-4f5e-babe-e02a02aef55e", 00:09:39.976 "is_configured": true, 00:09:39.976 "data_offset": 0, 00:09:39.976 "data_size": 65536 00:09:39.976 } 00:09:39.976 ] 00:09:39.976 } 00:09:39.976 } 00:09:39.976 }' 00:09:39.976 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.976 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:39.977 BaseBdev2 00:09:39.977 BaseBdev3' 00:09:39.977 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.977 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.977 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.977 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:39.977 17:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.977 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.977 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.977 17:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
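The `@189`/`@192` comparisons above check that the raid bdev and every base bdev serialize `[.block_size, .md_size, .md_interleave, .dif_type]` to the same string. A standalone sketch of that serialization (sample object is illustrative) shows why the compared value is `512` followed by trailing spaces:

```shell
# Illustrative sample of one bdev's metadata-format fields, as returned
# by `bdev_get_bdevs`; a 512-byte-block bdev with no metadata.
base='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'

# Same jq expression as bdev_raid.sh@189/@192. jq's join() renders null
# elements as empty strings, so the result is "512" plus three spaces,
# matching the script's [[ 512 == \5\1\2\ \ \  ]] glob comparison.
cmp=$(echo "$base" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
[ "$cmp" = '512   ' ] && echo match
```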
00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.977 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.977 [2024-11-20 17:44:07.129782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.977 [2024-11-20 17:44:07.129830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.977 [2024-11-20 17:44:07.129894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.238 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.238 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.239 "name": "Existed_Raid", 00:09:40.239 "uuid": "fdbeb881-7c0c-4e8b-b28f-f1b4e84e84ea", 00:09:40.239 "strip_size_kb": 64, 00:09:40.239 "state": "offline", 00:09:40.239 "raid_level": "concat", 00:09:40.239 "superblock": false, 00:09:40.239 "num_base_bdevs": 3, 00:09:40.239 "num_base_bdevs_discovered": 2, 00:09:40.239 "num_base_bdevs_operational": 2, 00:09:40.239 "base_bdevs_list": [ 00:09:40.239 { 00:09:40.239 "name": null, 00:09:40.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.239 "is_configured": false, 00:09:40.239 "data_offset": 0, 00:09:40.239 "data_size": 65536 00:09:40.239 }, 00:09:40.239 { 00:09:40.239 "name": "BaseBdev2", 00:09:40.239 "uuid": 
"f3347d16-afe7-42f2-adf3-5cc38334a15d", 00:09:40.239 "is_configured": true, 00:09:40.239 "data_offset": 0, 00:09:40.239 "data_size": 65536 00:09:40.239 }, 00:09:40.239 { 00:09:40.239 "name": "BaseBdev3", 00:09:40.239 "uuid": "03c283ca-5373-4f5e-babe-e02a02aef55e", 00:09:40.239 "is_configured": true, 00:09:40.239 "data_offset": 0, 00:09:40.239 "data_size": 65536 00:09:40.239 } 00:09:40.239 ] 00:09:40.239 }' 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.239 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 [2024-11-20 17:44:07.745739] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.807 17:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 [2024-11-20 17:44:07.910026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:40.807 [2024-11-20 17:44:07.910116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.067 17:44:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 BaseBdev2 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.067 
17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 [ 00:09:41.067 { 00:09:41.067 "name": "BaseBdev2", 00:09:41.067 "aliases": [ 00:09:41.067 "dee93ab9-c2fa-4646-8148-552e79b7378a" 00:09:41.067 ], 00:09:41.067 "product_name": "Malloc disk", 00:09:41.067 "block_size": 512, 00:09:41.067 "num_blocks": 65536, 00:09:41.067 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:41.067 "assigned_rate_limits": { 00:09:41.067 "rw_ios_per_sec": 0, 00:09:41.067 "rw_mbytes_per_sec": 0, 00:09:41.067 "r_mbytes_per_sec": 0, 00:09:41.067 "w_mbytes_per_sec": 0 00:09:41.067 }, 00:09:41.067 "claimed": false, 00:09:41.067 "zoned": false, 00:09:41.067 "supported_io_types": { 00:09:41.067 "read": true, 00:09:41.067 "write": true, 00:09:41.067 "unmap": true, 00:09:41.067 "flush": true, 00:09:41.067 "reset": true, 00:09:41.067 "nvme_admin": false, 00:09:41.067 "nvme_io": false, 00:09:41.067 "nvme_io_md": false, 00:09:41.067 "write_zeroes": true, 
00:09:41.067 "zcopy": true, 00:09:41.067 "get_zone_info": false, 00:09:41.067 "zone_management": false, 00:09:41.067 "zone_append": false, 00:09:41.067 "compare": false, 00:09:41.067 "compare_and_write": false, 00:09:41.067 "abort": true, 00:09:41.067 "seek_hole": false, 00:09:41.067 "seek_data": false, 00:09:41.067 "copy": true, 00:09:41.067 "nvme_iov_md": false 00:09:41.067 }, 00:09:41.067 "memory_domains": [ 00:09:41.067 { 00:09:41.067 "dma_device_id": "system", 00:09:41.067 "dma_device_type": 1 00:09:41.067 }, 00:09:41.067 { 00:09:41.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.067 "dma_device_type": 2 00:09:41.067 } 00:09:41.067 ], 00:09:41.067 "driver_specific": {} 00:09:41.067 } 00:09:41.067 ] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 BaseBdev3 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.067 17:44:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.067 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 [ 00:09:41.067 { 00:09:41.068 "name": "BaseBdev3", 00:09:41.068 "aliases": [ 00:09:41.068 "ef57218b-a2ae-490a-afad-6b4b3b78101f" 00:09:41.068 ], 00:09:41.068 "product_name": "Malloc disk", 00:09:41.068 "block_size": 512, 00:09:41.068 "num_blocks": 65536, 00:09:41.068 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:41.068 "assigned_rate_limits": { 00:09:41.068 "rw_ios_per_sec": 0, 00:09:41.068 "rw_mbytes_per_sec": 0, 00:09:41.068 "r_mbytes_per_sec": 0, 00:09:41.068 "w_mbytes_per_sec": 0 00:09:41.068 }, 00:09:41.068 "claimed": false, 00:09:41.068 "zoned": false, 00:09:41.068 "supported_io_types": { 00:09:41.068 "read": true, 00:09:41.327 "write": true, 00:09:41.327 "unmap": true, 00:09:41.327 "flush": true, 00:09:41.327 "reset": true, 00:09:41.327 "nvme_admin": false, 00:09:41.327 "nvme_io": false, 00:09:41.327 "nvme_io_md": false, 00:09:41.327 "write_zeroes": true, 
00:09:41.327 "zcopy": true, 00:09:41.327 "get_zone_info": false, 00:09:41.327 "zone_management": false, 00:09:41.327 "zone_append": false, 00:09:41.327 "compare": false, 00:09:41.327 "compare_and_write": false, 00:09:41.327 "abort": true, 00:09:41.327 "seek_hole": false, 00:09:41.327 "seek_data": false, 00:09:41.327 "copy": true, 00:09:41.327 "nvme_iov_md": false 00:09:41.327 }, 00:09:41.327 "memory_domains": [ 00:09:41.327 { 00:09:41.327 "dma_device_id": "system", 00:09:41.327 "dma_device_type": 1 00:09:41.327 }, 00:09:41.327 { 00:09:41.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.327 "dma_device_type": 2 00:09:41.327 } 00:09:41.327 ], 00:09:41.327 "driver_specific": {} 00:09:41.327 } 00:09:41.327 ] 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.327 [2024-11-20 17:44:08.256641] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.327 [2024-11-20 17:44:08.256836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.327 [2024-11-20 17:44:08.256895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.327 [2024-11-20 17:44:08.259249] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.327 "name": "Existed_Raid", 00:09:41.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.327 "strip_size_kb": 64, 00:09:41.327 "state": "configuring", 00:09:41.327 "raid_level": "concat", 00:09:41.327 "superblock": false, 00:09:41.327 "num_base_bdevs": 3, 00:09:41.327 "num_base_bdevs_discovered": 2, 00:09:41.327 "num_base_bdevs_operational": 3, 00:09:41.327 "base_bdevs_list": [ 00:09:41.327 { 00:09:41.327 "name": "BaseBdev1", 00:09:41.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.327 "is_configured": false, 00:09:41.327 "data_offset": 0, 00:09:41.327 "data_size": 0 00:09:41.327 }, 00:09:41.327 { 00:09:41.327 "name": "BaseBdev2", 00:09:41.327 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:41.327 "is_configured": true, 00:09:41.327 "data_offset": 0, 00:09:41.327 "data_size": 65536 00:09:41.327 }, 00:09:41.327 { 00:09:41.327 "name": "BaseBdev3", 00:09:41.327 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:41.327 "is_configured": true, 00:09:41.327 "data_offset": 0, 00:09:41.327 "data_size": 65536 00:09:41.327 } 00:09:41.327 ] 00:09:41.327 }' 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.327 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.587 [2024-11-20 17:44:08.731907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.587 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.846 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.846 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.846 "name": "Existed_Raid", 00:09:41.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.846 "strip_size_kb": 64, 00:09:41.846 "state": "configuring", 00:09:41.846 "raid_level": "concat", 00:09:41.846 "superblock": false, 
00:09:41.846 "num_base_bdevs": 3, 00:09:41.846 "num_base_bdevs_discovered": 1, 00:09:41.846 "num_base_bdevs_operational": 3, 00:09:41.846 "base_bdevs_list": [ 00:09:41.846 { 00:09:41.846 "name": "BaseBdev1", 00:09:41.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.846 "is_configured": false, 00:09:41.846 "data_offset": 0, 00:09:41.846 "data_size": 0 00:09:41.846 }, 00:09:41.846 { 00:09:41.846 "name": null, 00:09:41.846 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:41.846 "is_configured": false, 00:09:41.846 "data_offset": 0, 00:09:41.846 "data_size": 65536 00:09:41.846 }, 00:09:41.846 { 00:09:41.846 "name": "BaseBdev3", 00:09:41.846 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:41.846 "is_configured": true, 00:09:41.846 "data_offset": 0, 00:09:41.846 "data_size": 65536 00:09:41.846 } 00:09:41.846 ] 00:09:41.846 }' 00:09:41.846 17:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.846 17:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.119 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.119 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:42.120 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.120 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.120 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.120 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:42.120 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.120 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.120 
17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.382 BaseBdev1 00:09:42.382 [2024-11-20 17:44:09.308352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.382 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.382 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:42.382 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:42.382 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.382 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:42.382 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.382 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.382 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.383 [ 00:09:42.383 { 00:09:42.383 "name": "BaseBdev1", 00:09:42.383 "aliases": [ 00:09:42.383 "56dc03e8-f391-4162-b2bc-a10c1411d82a" 00:09:42.383 ], 00:09:42.383 "product_name": 
"Malloc disk", 00:09:42.383 "block_size": 512, 00:09:42.383 "num_blocks": 65536, 00:09:42.383 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:42.383 "assigned_rate_limits": { 00:09:42.383 "rw_ios_per_sec": 0, 00:09:42.383 "rw_mbytes_per_sec": 0, 00:09:42.383 "r_mbytes_per_sec": 0, 00:09:42.383 "w_mbytes_per_sec": 0 00:09:42.383 }, 00:09:42.383 "claimed": true, 00:09:42.383 "claim_type": "exclusive_write", 00:09:42.383 "zoned": false, 00:09:42.383 "supported_io_types": { 00:09:42.383 "read": true, 00:09:42.383 "write": true, 00:09:42.383 "unmap": true, 00:09:42.383 "flush": true, 00:09:42.383 "reset": true, 00:09:42.383 "nvme_admin": false, 00:09:42.383 "nvme_io": false, 00:09:42.383 "nvme_io_md": false, 00:09:42.383 "write_zeroes": true, 00:09:42.383 "zcopy": true, 00:09:42.383 "get_zone_info": false, 00:09:42.383 "zone_management": false, 00:09:42.383 "zone_append": false, 00:09:42.383 "compare": false, 00:09:42.383 "compare_and_write": false, 00:09:42.383 "abort": true, 00:09:42.383 "seek_hole": false, 00:09:42.383 "seek_data": false, 00:09:42.383 "copy": true, 00:09:42.383 "nvme_iov_md": false 00:09:42.383 }, 00:09:42.383 "memory_domains": [ 00:09:42.383 { 00:09:42.383 "dma_device_id": "system", 00:09:42.383 "dma_device_type": 1 00:09:42.383 }, 00:09:42.383 { 00:09:42.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.383 "dma_device_type": 2 00:09:42.383 } 00:09:42.383 ], 00:09:42.383 "driver_specific": {} 00:09:42.383 } 00:09:42.383 ] 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.383 17:44:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.383 "name": "Existed_Raid", 00:09:42.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.383 "strip_size_kb": 64, 00:09:42.383 "state": "configuring", 00:09:42.383 "raid_level": "concat", 00:09:42.383 "superblock": false, 00:09:42.383 "num_base_bdevs": 3, 00:09:42.383 "num_base_bdevs_discovered": 2, 00:09:42.383 "num_base_bdevs_operational": 3, 00:09:42.383 "base_bdevs_list": [ 00:09:42.383 { 00:09:42.383 "name": "BaseBdev1", 
00:09:42.383 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:42.383 "is_configured": true, 00:09:42.383 "data_offset": 0, 00:09:42.383 "data_size": 65536 00:09:42.383 }, 00:09:42.383 { 00:09:42.383 "name": null, 00:09:42.383 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:42.383 "is_configured": false, 00:09:42.383 "data_offset": 0, 00:09:42.383 "data_size": 65536 00:09:42.383 }, 00:09:42.383 { 00:09:42.383 "name": "BaseBdev3", 00:09:42.383 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:42.383 "is_configured": true, 00:09:42.383 "data_offset": 0, 00:09:42.383 "data_size": 65536 00:09:42.383 } 00:09:42.383 ] 00:09:42.383 }' 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.383 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 [2024-11-20 17:44:09.807629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.643 
17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.643 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.904 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.904 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.904 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.904 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.904 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.904 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.904 "name": "Existed_Raid", 00:09:42.904 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:42.904 "strip_size_kb": 64, 00:09:42.904 "state": "configuring", 00:09:42.904 "raid_level": "concat", 00:09:42.904 "superblock": false, 00:09:42.904 "num_base_bdevs": 3, 00:09:42.904 "num_base_bdevs_discovered": 1, 00:09:42.904 "num_base_bdevs_operational": 3, 00:09:42.904 "base_bdevs_list": [ 00:09:42.904 { 00:09:42.904 "name": "BaseBdev1", 00:09:42.904 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:42.904 "is_configured": true, 00:09:42.904 "data_offset": 0, 00:09:42.904 "data_size": 65536 00:09:42.904 }, 00:09:42.904 { 00:09:42.904 "name": null, 00:09:42.904 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:42.904 "is_configured": false, 00:09:42.904 "data_offset": 0, 00:09:42.904 "data_size": 65536 00:09:42.904 }, 00:09:42.904 { 00:09:42.904 "name": null, 00:09:42.904 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:42.904 "is_configured": false, 00:09:42.904 "data_offset": 0, 00:09:42.904 "data_size": 65536 00:09:42.904 } 00:09:42.904 ] 00:09:42.904 }' 00:09:42.904 17:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.904 17:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.163 [2024-11-20 17:44:10.278948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.163 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.163 "name": "Existed_Raid", 00:09:43.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.163 "strip_size_kb": 64, 00:09:43.163 "state": "configuring", 00:09:43.163 "raid_level": "concat", 00:09:43.163 "superblock": false, 00:09:43.163 "num_base_bdevs": 3, 00:09:43.163 "num_base_bdevs_discovered": 2, 00:09:43.163 "num_base_bdevs_operational": 3, 00:09:43.163 "base_bdevs_list": [ 00:09:43.163 { 00:09:43.163 "name": "BaseBdev1", 00:09:43.163 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:43.163 "is_configured": true, 00:09:43.163 "data_offset": 0, 00:09:43.163 "data_size": 65536 00:09:43.163 }, 00:09:43.163 { 00:09:43.163 "name": null, 00:09:43.163 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:43.163 "is_configured": false, 00:09:43.163 "data_offset": 0, 00:09:43.163 "data_size": 65536 00:09:43.163 }, 00:09:43.163 { 00:09:43.163 "name": "BaseBdev3", 00:09:43.163 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:43.164 "is_configured": true, 00:09:43.164 "data_offset": 0, 00:09:43.164 "data_size": 65536 00:09:43.164 } 00:09:43.164 ] 00:09:43.164 }' 00:09:43.164 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.164 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.733 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.734 [2024-11-20 17:44:10.750139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.734 
17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.734 17:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.993 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.993 "name": "Existed_Raid", 00:09:43.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.993 "strip_size_kb": 64, 00:09:43.993 "state": "configuring", 00:09:43.993 "raid_level": "concat", 00:09:43.993 "superblock": false, 00:09:43.993 "num_base_bdevs": 3, 00:09:43.993 "num_base_bdevs_discovered": 1, 00:09:43.993 "num_base_bdevs_operational": 3, 00:09:43.993 "base_bdevs_list": [ 00:09:43.993 { 00:09:43.993 "name": null, 00:09:43.993 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:43.993 "is_configured": false, 00:09:43.993 "data_offset": 0, 00:09:43.993 "data_size": 65536 00:09:43.993 }, 00:09:43.993 { 00:09:43.993 "name": null, 00:09:43.993 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:43.993 "is_configured": false, 00:09:43.993 "data_offset": 0, 00:09:43.993 "data_size": 65536 00:09:43.993 }, 00:09:43.993 { 00:09:43.993 "name": "BaseBdev3", 00:09:43.993 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:43.993 "is_configured": true, 00:09:43.993 "data_offset": 0, 00:09:43.993 "data_size": 65536 00:09:43.993 } 00:09:43.993 ] 00:09:43.993 }' 00:09:43.993 17:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.993 17:44:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.252 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.252 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.253 [2024-11-20 17:44:11.365816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.253 17:44:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.253 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.512 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.512 "name": "Existed_Raid", 00:09:44.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.512 "strip_size_kb": 64, 00:09:44.512 "state": "configuring", 00:09:44.512 "raid_level": "concat", 00:09:44.512 "superblock": false, 00:09:44.512 "num_base_bdevs": 3, 00:09:44.512 "num_base_bdevs_discovered": 2, 00:09:44.512 "num_base_bdevs_operational": 3, 00:09:44.512 "base_bdevs_list": [ 00:09:44.512 { 00:09:44.512 "name": null, 00:09:44.512 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:44.512 "is_configured": false, 00:09:44.512 "data_offset": 0, 00:09:44.512 "data_size": 65536 00:09:44.512 }, 00:09:44.512 { 00:09:44.512 "name": "BaseBdev2", 00:09:44.512 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:44.512 "is_configured": true, 00:09:44.512 "data_offset": 
0, 00:09:44.512 "data_size": 65536 00:09:44.512 }, 00:09:44.512 { 00:09:44.512 "name": "BaseBdev3", 00:09:44.512 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:44.512 "is_configured": true, 00:09:44.512 "data_offset": 0, 00:09:44.512 "data_size": 65536 00:09:44.512 } 00:09:44.512 ] 00:09:44.512 }' 00:09:44.512 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.512 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 56dc03e8-f391-4162-b2bc-a10c1411d82a 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 [2024-11-20 17:44:11.915783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:44.771 [2024-11-20 17:44:11.915836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:44.771 [2024-11-20 17:44:11.915846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:44.771 [2024-11-20 17:44:11.916165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:44.771 [2024-11-20 17:44:11.916352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:44.771 [2024-11-20 17:44:11.916362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:44.771 [2024-11-20 17:44:11.916676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.771 NewBaseBdev 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.771 
17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.771 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.772 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.772 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:44.772 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.772 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.031 [ 00:09:45.031 { 00:09:45.031 "name": "NewBaseBdev", 00:09:45.031 "aliases": [ 00:09:45.031 "56dc03e8-f391-4162-b2bc-a10c1411d82a" 00:09:45.031 ], 00:09:45.031 "product_name": "Malloc disk", 00:09:45.031 "block_size": 512, 00:09:45.031 "num_blocks": 65536, 00:09:45.031 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:45.031 "assigned_rate_limits": { 00:09:45.031 "rw_ios_per_sec": 0, 00:09:45.031 "rw_mbytes_per_sec": 0, 00:09:45.031 "r_mbytes_per_sec": 0, 00:09:45.031 "w_mbytes_per_sec": 0 00:09:45.031 }, 00:09:45.031 "claimed": true, 00:09:45.031 "claim_type": "exclusive_write", 00:09:45.031 "zoned": false, 00:09:45.031 "supported_io_types": { 00:09:45.031 "read": true, 00:09:45.031 "write": true, 00:09:45.031 "unmap": true, 00:09:45.031 "flush": true, 00:09:45.031 "reset": true, 00:09:45.031 "nvme_admin": false, 00:09:45.031 "nvme_io": false, 00:09:45.031 "nvme_io_md": false, 00:09:45.031 "write_zeroes": true, 00:09:45.031 "zcopy": true, 00:09:45.031 "get_zone_info": false, 00:09:45.031 "zone_management": false, 00:09:45.031 "zone_append": false, 00:09:45.031 "compare": false, 00:09:45.031 "compare_and_write": false, 00:09:45.031 "abort": true, 00:09:45.031 "seek_hole": false, 00:09:45.031 "seek_data": false, 00:09:45.031 "copy": true, 00:09:45.031 "nvme_iov_md": false 00:09:45.031 }, 00:09:45.031 
"memory_domains": [ 00:09:45.031 { 00:09:45.031 "dma_device_id": "system", 00:09:45.031 "dma_device_type": 1 00:09:45.031 }, 00:09:45.031 { 00:09:45.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.031 "dma_device_type": 2 00:09:45.031 } 00:09:45.031 ], 00:09:45.031 "driver_specific": {} 00:09:45.031 } 00:09:45.031 ] 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.031 17:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.031 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.031 "name": "Existed_Raid", 00:09:45.031 "uuid": "2e1d0e03-006f-464d-8b5a-f918386b1d96", 00:09:45.031 "strip_size_kb": 64, 00:09:45.031 "state": "online", 00:09:45.031 "raid_level": "concat", 00:09:45.031 "superblock": false, 00:09:45.031 "num_base_bdevs": 3, 00:09:45.031 "num_base_bdevs_discovered": 3, 00:09:45.031 "num_base_bdevs_operational": 3, 00:09:45.031 "base_bdevs_list": [ 00:09:45.031 { 00:09:45.031 "name": "NewBaseBdev", 00:09:45.031 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:45.031 "is_configured": true, 00:09:45.031 "data_offset": 0, 00:09:45.031 "data_size": 65536 00:09:45.031 }, 00:09:45.031 { 00:09:45.031 "name": "BaseBdev2", 00:09:45.031 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:45.031 "is_configured": true, 00:09:45.031 "data_offset": 0, 00:09:45.031 "data_size": 65536 00:09:45.031 }, 00:09:45.031 { 00:09:45.031 "name": "BaseBdev3", 00:09:45.031 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:45.031 "is_configured": true, 00:09:45.031 "data_offset": 0, 00:09:45.031 "data_size": 65536 00:09:45.031 } 00:09:45.031 ] 00:09:45.031 }' 00:09:45.031 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.031 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.290 [2024-11-20 17:44:12.423340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.290 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.550 "name": "Existed_Raid", 00:09:45.550 "aliases": [ 00:09:45.550 "2e1d0e03-006f-464d-8b5a-f918386b1d96" 00:09:45.550 ], 00:09:45.550 "product_name": "Raid Volume", 00:09:45.550 "block_size": 512, 00:09:45.550 "num_blocks": 196608, 00:09:45.550 "uuid": "2e1d0e03-006f-464d-8b5a-f918386b1d96", 00:09:45.550 "assigned_rate_limits": { 00:09:45.550 "rw_ios_per_sec": 0, 00:09:45.550 "rw_mbytes_per_sec": 0, 00:09:45.550 "r_mbytes_per_sec": 0, 00:09:45.550 "w_mbytes_per_sec": 0 00:09:45.550 }, 00:09:45.550 "claimed": false, 00:09:45.550 "zoned": false, 00:09:45.550 "supported_io_types": { 00:09:45.550 "read": true, 00:09:45.550 "write": true, 00:09:45.550 "unmap": true, 00:09:45.550 "flush": true, 00:09:45.550 "reset": true, 00:09:45.550 "nvme_admin": false, 00:09:45.550 "nvme_io": false, 00:09:45.550 "nvme_io_md": false, 00:09:45.550 "write_zeroes": true, 
00:09:45.550 "zcopy": false, 00:09:45.550 "get_zone_info": false, 00:09:45.550 "zone_management": false, 00:09:45.550 "zone_append": false, 00:09:45.550 "compare": false, 00:09:45.550 "compare_and_write": false, 00:09:45.550 "abort": false, 00:09:45.550 "seek_hole": false, 00:09:45.550 "seek_data": false, 00:09:45.550 "copy": false, 00:09:45.550 "nvme_iov_md": false 00:09:45.550 }, 00:09:45.550 "memory_domains": [ 00:09:45.550 { 00:09:45.550 "dma_device_id": "system", 00:09:45.550 "dma_device_type": 1 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.550 "dma_device_type": 2 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "dma_device_id": "system", 00:09:45.550 "dma_device_type": 1 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.550 "dma_device_type": 2 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "dma_device_id": "system", 00:09:45.550 "dma_device_type": 1 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.550 "dma_device_type": 2 00:09:45.550 } 00:09:45.550 ], 00:09:45.550 "driver_specific": { 00:09:45.550 "raid": { 00:09:45.550 "uuid": "2e1d0e03-006f-464d-8b5a-f918386b1d96", 00:09:45.550 "strip_size_kb": 64, 00:09:45.550 "state": "online", 00:09:45.550 "raid_level": "concat", 00:09:45.550 "superblock": false, 00:09:45.550 "num_base_bdevs": 3, 00:09:45.550 "num_base_bdevs_discovered": 3, 00:09:45.550 "num_base_bdevs_operational": 3, 00:09:45.550 "base_bdevs_list": [ 00:09:45.550 { 00:09:45.550 "name": "NewBaseBdev", 00:09:45.550 "uuid": "56dc03e8-f391-4162-b2bc-a10c1411d82a", 00:09:45.550 "is_configured": true, 00:09:45.550 "data_offset": 0, 00:09:45.550 "data_size": 65536 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "name": "BaseBdev2", 00:09:45.550 "uuid": "dee93ab9-c2fa-4646-8148-552e79b7378a", 00:09:45.550 "is_configured": true, 00:09:45.550 "data_offset": 0, 00:09:45.550 "data_size": 65536 00:09:45.550 }, 00:09:45.550 { 
00:09:45.550 "name": "BaseBdev3", 00:09:45.550 "uuid": "ef57218b-a2ae-490a-afad-6b4b3b78101f", 00:09:45.550 "is_configured": true, 00:09:45.550 "data_offset": 0, 00:09:45.550 "data_size": 65536 00:09:45.550 } 00:09:45.550 ] 00:09:45.550 } 00:09:45.550 } 00:09:45.550 }' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:45.550 BaseBdev2 00:09:45.550 BaseBdev3' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:45.550 [2024-11-20 17:44:12.694460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.550 [2024-11-20 17:44:12.694502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.550 [2024-11-20 17:44:12.694605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.550 [2024-11-20 17:44:12.694669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.550 [2024-11-20 17:44:12.694683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65998 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65998 ']' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65998 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.550 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65998 00:09:45.810 killing process with pid 65998 00:09:45.810 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.810 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.810 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65998' 00:09:45.810 17:44:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65998 00:09:45.810 [2024-11-20 17:44:12.741831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.810 17:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65998 00:09:46.070 [2024-11-20 17:44:13.078272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:47.448 00:09:47.448 real 0m10.959s 00:09:47.448 user 0m17.140s 00:09:47.448 sys 0m2.070s 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.448 ************************************ 00:09:47.448 END TEST raid_state_function_test 00:09:47.448 ************************************ 00:09:47.448 17:44:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:47.448 17:44:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:47.448 17:44:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.448 17:44:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.448 ************************************ 00:09:47.448 START TEST raid_state_function_test_sb 00:09:47.448 ************************************ 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:47.448 17:44:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66625 00:09:47.449 Process raid pid: 66625 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66625' 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66625 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66625 ']' 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.449 17:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.449 [2024-11-20 17:44:14.485699] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:09:47.449 [2024-11-20 17:44:14.485906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.707 [2024-11-20 17:44:14.644918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.707 [2024-11-20 17:44:14.779638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.966 [2024-11-20 17:44:15.025169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.966 [2024-11-20 17:44:15.025310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.225 [2024-11-20 17:44:15.328930] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.225 [2024-11-20 17:44:15.329111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.225 [2024-11-20 
17:44:15.329129] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.225 [2024-11-20 17:44:15.329139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.225 [2024-11-20 17:44:15.329145] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.225 [2024-11-20 17:44:15.329155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.225 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.226 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.226 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.226 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.226 "name": "Existed_Raid", 00:09:48.226 "uuid": "b09faaf1-cfa0-42bc-98d9-fa2d81450351", 00:09:48.226 "strip_size_kb": 64, 00:09:48.226 "state": "configuring", 00:09:48.226 "raid_level": "concat", 00:09:48.226 "superblock": true, 00:09:48.226 "num_base_bdevs": 3, 00:09:48.226 "num_base_bdevs_discovered": 0, 00:09:48.226 "num_base_bdevs_operational": 3, 00:09:48.226 "base_bdevs_list": [ 00:09:48.226 { 00:09:48.226 "name": "BaseBdev1", 00:09:48.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.226 "is_configured": false, 00:09:48.226 "data_offset": 0, 00:09:48.226 "data_size": 0 00:09:48.226 }, 00:09:48.226 { 00:09:48.226 "name": "BaseBdev2", 00:09:48.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.226 "is_configured": false, 00:09:48.226 "data_offset": 0, 00:09:48.226 "data_size": 0 00:09:48.226 }, 00:09:48.226 { 00:09:48.226 "name": "BaseBdev3", 00:09:48.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.226 "is_configured": false, 00:09:48.226 "data_offset": 0, 00:09:48.226 "data_size": 0 00:09:48.226 } 00:09:48.226 ] 00:09:48.226 }' 00:09:48.226 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.226 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 [2024-11-20 17:44:15.808101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.795 [2024-11-20 17:44:15.808255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 [2024-11-20 17:44:15.820056] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.795 [2024-11-20 17:44:15.820158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.795 [2024-11-20 17:44:15.820199] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.795 [2024-11-20 17:44:15.820224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.795 [2024-11-20 17:44:15.820274] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.795 [2024-11-20 17:44:15.820335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.795 
17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 [2024-11-20 17:44:15.874159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.795 BaseBdev1 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.795 [ 00:09:48.795 { 
00:09:48.795 "name": "BaseBdev1", 00:09:48.795 "aliases": [ 00:09:48.795 "6f74681f-b4b7-4c78-b17f-a37f36878790" 00:09:48.795 ], 00:09:48.795 "product_name": "Malloc disk", 00:09:48.795 "block_size": 512, 00:09:48.795 "num_blocks": 65536, 00:09:48.795 "uuid": "6f74681f-b4b7-4c78-b17f-a37f36878790", 00:09:48.795 "assigned_rate_limits": { 00:09:48.795 "rw_ios_per_sec": 0, 00:09:48.795 "rw_mbytes_per_sec": 0, 00:09:48.795 "r_mbytes_per_sec": 0, 00:09:48.795 "w_mbytes_per_sec": 0 00:09:48.795 }, 00:09:48.795 "claimed": true, 00:09:48.795 "claim_type": "exclusive_write", 00:09:48.795 "zoned": false, 00:09:48.795 "supported_io_types": { 00:09:48.795 "read": true, 00:09:48.795 "write": true, 00:09:48.795 "unmap": true, 00:09:48.795 "flush": true, 00:09:48.795 "reset": true, 00:09:48.795 "nvme_admin": false, 00:09:48.795 "nvme_io": false, 00:09:48.795 "nvme_io_md": false, 00:09:48.795 "write_zeroes": true, 00:09:48.795 "zcopy": true, 00:09:48.795 "get_zone_info": false, 00:09:48.795 "zone_management": false, 00:09:48.795 "zone_append": false, 00:09:48.795 "compare": false, 00:09:48.795 "compare_and_write": false, 00:09:48.795 "abort": true, 00:09:48.795 "seek_hole": false, 00:09:48.795 "seek_data": false, 00:09:48.795 "copy": true, 00:09:48.795 "nvme_iov_md": false 00:09:48.795 }, 00:09:48.795 "memory_domains": [ 00:09:48.795 { 00:09:48.795 "dma_device_id": "system", 00:09:48.795 "dma_device_type": 1 00:09:48.795 }, 00:09:48.795 { 00:09:48.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.795 "dma_device_type": 2 00:09:48.795 } 00:09:48.795 ], 00:09:48.795 "driver_specific": {} 00:09:48.795 } 00:09:48.795 ] 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:48.795 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.796 "name": "Existed_Raid", 00:09:48.796 "uuid": "13c41169-40b9-46bb-9c62-5f25d2657c9e", 00:09:48.796 "strip_size_kb": 64, 00:09:48.796 "state": "configuring", 00:09:48.796 "raid_level": "concat", 00:09:48.796 "superblock": true, 00:09:48.796 
"num_base_bdevs": 3, 00:09:48.796 "num_base_bdevs_discovered": 1, 00:09:48.796 "num_base_bdevs_operational": 3, 00:09:48.796 "base_bdevs_list": [ 00:09:48.796 { 00:09:48.796 "name": "BaseBdev1", 00:09:48.796 "uuid": "6f74681f-b4b7-4c78-b17f-a37f36878790", 00:09:48.796 "is_configured": true, 00:09:48.796 "data_offset": 2048, 00:09:48.796 "data_size": 63488 00:09:48.796 }, 00:09:48.796 { 00:09:48.796 "name": "BaseBdev2", 00:09:48.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.796 "is_configured": false, 00:09:48.796 "data_offset": 0, 00:09:48.796 "data_size": 0 00:09:48.796 }, 00:09:48.796 { 00:09:48.796 "name": "BaseBdev3", 00:09:48.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.796 "is_configured": false, 00:09:48.796 "data_offset": 0, 00:09:48.796 "data_size": 0 00:09:48.796 } 00:09:48.796 ] 00:09:48.796 }' 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.796 17:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.365 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.365 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.365 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.365 [2024-11-20 17:44:16.317487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.365 [2024-11-20 17:44:16.317567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:49.365 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.365 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:49.365 
17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.365 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.365 [2024-11-20 17:44:16.329510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.365 [2024-11-20 17:44:16.331777] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:49.366 [2024-11-20 17:44:16.331867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:49.366 [2024-11-20 17:44:16.331903] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:49.366 [2024-11-20 17:44:16.331928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.366 "name": "Existed_Raid", 00:09:49.366 "uuid": "5e542a5e-6b21-4ac0-9a48-1a672e30d8da", 00:09:49.366 "strip_size_kb": 64, 00:09:49.366 "state": "configuring", 00:09:49.366 "raid_level": "concat", 00:09:49.366 "superblock": true, 00:09:49.366 "num_base_bdevs": 3, 00:09:49.366 "num_base_bdevs_discovered": 1, 00:09:49.366 "num_base_bdevs_operational": 3, 00:09:49.366 "base_bdevs_list": [ 00:09:49.366 { 00:09:49.366 "name": "BaseBdev1", 00:09:49.366 "uuid": "6f74681f-b4b7-4c78-b17f-a37f36878790", 00:09:49.366 "is_configured": true, 00:09:49.366 "data_offset": 2048, 00:09:49.366 "data_size": 63488 00:09:49.366 }, 00:09:49.366 { 00:09:49.366 "name": "BaseBdev2", 00:09:49.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.366 "is_configured": false, 00:09:49.366 "data_offset": 0, 00:09:49.366 "data_size": 0 00:09:49.366 }, 00:09:49.366 { 00:09:49.366 "name": "BaseBdev3", 00:09:49.366 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:49.366 "is_configured": false, 00:09:49.366 "data_offset": 0, 00:09:49.366 "data_size": 0 00:09:49.366 } 00:09:49.366 ] 00:09:49.366 }' 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.366 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.939 [2024-11-20 17:44:16.858836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.939 BaseBdev2 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.939 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.939 [ 00:09:49.939 { 00:09:49.939 "name": "BaseBdev2", 00:09:49.939 "aliases": [ 00:09:49.939 "206b8bee-adec-4101-94fa-473d21c55d89" 00:09:49.939 ], 00:09:49.939 "product_name": "Malloc disk", 00:09:49.939 "block_size": 512, 00:09:49.939 "num_blocks": 65536, 00:09:49.939 "uuid": "206b8bee-adec-4101-94fa-473d21c55d89", 00:09:49.939 "assigned_rate_limits": { 00:09:49.939 "rw_ios_per_sec": 0, 00:09:49.939 "rw_mbytes_per_sec": 0, 00:09:49.939 "r_mbytes_per_sec": 0, 00:09:49.939 "w_mbytes_per_sec": 0 00:09:49.939 }, 00:09:49.939 "claimed": true, 00:09:49.939 "claim_type": "exclusive_write", 00:09:49.939 "zoned": false, 00:09:49.939 "supported_io_types": { 00:09:49.939 "read": true, 00:09:49.939 "write": true, 00:09:49.939 "unmap": true, 00:09:49.939 "flush": true, 00:09:49.939 "reset": true, 00:09:49.939 "nvme_admin": false, 00:09:49.939 "nvme_io": false, 00:09:49.939 "nvme_io_md": false, 00:09:49.939 "write_zeroes": true, 00:09:49.939 "zcopy": true, 00:09:49.939 "get_zone_info": false, 00:09:49.939 "zone_management": false, 00:09:49.939 "zone_append": false, 00:09:49.939 "compare": false, 00:09:49.939 "compare_and_write": false, 00:09:49.939 "abort": true, 00:09:49.939 "seek_hole": false, 00:09:49.939 "seek_data": false, 00:09:49.939 "copy": true, 00:09:49.939 "nvme_iov_md": false 00:09:49.939 }, 00:09:49.939 "memory_domains": [ 00:09:49.939 { 00:09:49.940 "dma_device_id": "system", 00:09:49.940 "dma_device_type": 1 00:09:49.940 }, 00:09:49.940 { 00:09:49.940 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.940 "dma_device_type": 2 00:09:49.940 } 00:09:49.940 ], 00:09:49.940 "driver_specific": {} 00:09:49.940 } 00:09:49.940 ] 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.940 "name": "Existed_Raid", 00:09:49.940 "uuid": "5e542a5e-6b21-4ac0-9a48-1a672e30d8da", 00:09:49.940 "strip_size_kb": 64, 00:09:49.940 "state": "configuring", 00:09:49.940 "raid_level": "concat", 00:09:49.940 "superblock": true, 00:09:49.940 "num_base_bdevs": 3, 00:09:49.940 "num_base_bdevs_discovered": 2, 00:09:49.940 "num_base_bdevs_operational": 3, 00:09:49.940 "base_bdevs_list": [ 00:09:49.940 { 00:09:49.940 "name": "BaseBdev1", 00:09:49.940 "uuid": "6f74681f-b4b7-4c78-b17f-a37f36878790", 00:09:49.940 "is_configured": true, 00:09:49.940 "data_offset": 2048, 00:09:49.940 "data_size": 63488 00:09:49.940 }, 00:09:49.940 { 00:09:49.940 "name": "BaseBdev2", 00:09:49.940 "uuid": "206b8bee-adec-4101-94fa-473d21c55d89", 00:09:49.940 "is_configured": true, 00:09:49.940 "data_offset": 2048, 00:09:49.940 "data_size": 63488 00:09:49.940 }, 00:09:49.940 { 00:09:49.940 "name": "BaseBdev3", 00:09:49.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.940 "is_configured": false, 00:09:49.940 "data_offset": 0, 00:09:49.940 "data_size": 0 00:09:49.940 } 00:09:49.940 ] 00:09:49.940 }' 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.940 17:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.210 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:50.210 17:44:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.210 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.469 [2024-11-20 17:44:17.413699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.469 [2024-11-20 17:44:17.414164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:50.469 [2024-11-20 17:44:17.414230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.469 [2024-11-20 17:44:17.414756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:50.469 BaseBdev3 00:09:50.469 [2024-11-20 17:44:17.414991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:50.469 [2024-11-20 17:44:17.415052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:50.469 [2024-11-20 17:44:17.415260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.469 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.469 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:50.469 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.470 [ 00:09:50.470 { 00:09:50.470 "name": "BaseBdev3", 00:09:50.470 "aliases": [ 00:09:50.470 "e09e10ab-b24b-43eb-be18-a0f28f60043b" 00:09:50.470 ], 00:09:50.470 "product_name": "Malloc disk", 00:09:50.470 "block_size": 512, 00:09:50.470 "num_blocks": 65536, 00:09:50.470 "uuid": "e09e10ab-b24b-43eb-be18-a0f28f60043b", 00:09:50.470 "assigned_rate_limits": { 00:09:50.470 "rw_ios_per_sec": 0, 00:09:50.470 "rw_mbytes_per_sec": 0, 00:09:50.470 "r_mbytes_per_sec": 0, 00:09:50.470 "w_mbytes_per_sec": 0 00:09:50.470 }, 00:09:50.470 "claimed": true, 00:09:50.470 "claim_type": "exclusive_write", 00:09:50.470 "zoned": false, 00:09:50.470 "supported_io_types": { 00:09:50.470 "read": true, 00:09:50.470 "write": true, 00:09:50.470 "unmap": true, 00:09:50.470 "flush": true, 00:09:50.470 "reset": true, 00:09:50.470 "nvme_admin": false, 00:09:50.470 "nvme_io": false, 00:09:50.470 "nvme_io_md": false, 00:09:50.470 "write_zeroes": true, 00:09:50.470 "zcopy": true, 00:09:50.470 "get_zone_info": false, 00:09:50.470 "zone_management": false, 00:09:50.470 "zone_append": false, 00:09:50.470 "compare": false, 00:09:50.470 "compare_and_write": false, 00:09:50.470 "abort": true, 00:09:50.470 "seek_hole": false, 00:09:50.470 "seek_data": false, 
00:09:50.470 "copy": true, 00:09:50.470 "nvme_iov_md": false 00:09:50.470 }, 00:09:50.470 "memory_domains": [ 00:09:50.470 { 00:09:50.470 "dma_device_id": "system", 00:09:50.470 "dma_device_type": 1 00:09:50.470 }, 00:09:50.470 { 00:09:50.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.470 "dma_device_type": 2 00:09:50.470 } 00:09:50.470 ], 00:09:50.470 "driver_specific": {} 00:09:50.470 } 00:09:50.470 ] 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.470 "name": "Existed_Raid", 00:09:50.470 "uuid": "5e542a5e-6b21-4ac0-9a48-1a672e30d8da", 00:09:50.470 "strip_size_kb": 64, 00:09:50.470 "state": "online", 00:09:50.470 "raid_level": "concat", 00:09:50.470 "superblock": true, 00:09:50.470 "num_base_bdevs": 3, 00:09:50.470 "num_base_bdevs_discovered": 3, 00:09:50.470 "num_base_bdevs_operational": 3, 00:09:50.470 "base_bdevs_list": [ 00:09:50.470 { 00:09:50.470 "name": "BaseBdev1", 00:09:50.470 "uuid": "6f74681f-b4b7-4c78-b17f-a37f36878790", 00:09:50.470 "is_configured": true, 00:09:50.470 "data_offset": 2048, 00:09:50.470 "data_size": 63488 00:09:50.470 }, 00:09:50.470 { 00:09:50.470 "name": "BaseBdev2", 00:09:50.470 "uuid": "206b8bee-adec-4101-94fa-473d21c55d89", 00:09:50.470 "is_configured": true, 00:09:50.470 "data_offset": 2048, 00:09:50.470 "data_size": 63488 00:09:50.470 }, 00:09:50.470 { 00:09:50.470 "name": "BaseBdev3", 00:09:50.470 "uuid": "e09e10ab-b24b-43eb-be18-a0f28f60043b", 00:09:50.470 "is_configured": true, 00:09:50.470 "data_offset": 2048, 00:09:50.470 "data_size": 63488 00:09:50.470 } 00:09:50.470 ] 00:09:50.470 }' 00:09:50.470 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.470 17:44:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.730 [2024-11-20 17:44:17.881399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.730 17:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.990 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.990 "name": "Existed_Raid", 00:09:50.990 "aliases": [ 00:09:50.990 "5e542a5e-6b21-4ac0-9a48-1a672e30d8da" 00:09:50.990 ], 00:09:50.990 "product_name": "Raid Volume", 00:09:50.990 "block_size": 512, 00:09:50.990 "num_blocks": 190464, 00:09:50.990 "uuid": "5e542a5e-6b21-4ac0-9a48-1a672e30d8da", 00:09:50.990 "assigned_rate_limits": { 00:09:50.990 "rw_ios_per_sec": 0, 00:09:50.990 "rw_mbytes_per_sec": 0, 00:09:50.990 
"r_mbytes_per_sec": 0, 00:09:50.990 "w_mbytes_per_sec": 0 00:09:50.990 }, 00:09:50.990 "claimed": false, 00:09:50.990 "zoned": false, 00:09:50.990 "supported_io_types": { 00:09:50.990 "read": true, 00:09:50.990 "write": true, 00:09:50.990 "unmap": true, 00:09:50.990 "flush": true, 00:09:50.990 "reset": true, 00:09:50.990 "nvme_admin": false, 00:09:50.990 "nvme_io": false, 00:09:50.990 "nvme_io_md": false, 00:09:50.990 "write_zeroes": true, 00:09:50.990 "zcopy": false, 00:09:50.990 "get_zone_info": false, 00:09:50.990 "zone_management": false, 00:09:50.990 "zone_append": false, 00:09:50.990 "compare": false, 00:09:50.990 "compare_and_write": false, 00:09:50.990 "abort": false, 00:09:50.990 "seek_hole": false, 00:09:50.990 "seek_data": false, 00:09:50.990 "copy": false, 00:09:50.990 "nvme_iov_md": false 00:09:50.990 }, 00:09:50.990 "memory_domains": [ 00:09:50.990 { 00:09:50.990 "dma_device_id": "system", 00:09:50.990 "dma_device_type": 1 00:09:50.990 }, 00:09:50.990 { 00:09:50.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.990 "dma_device_type": 2 00:09:50.990 }, 00:09:50.990 { 00:09:50.990 "dma_device_id": "system", 00:09:50.990 "dma_device_type": 1 00:09:50.990 }, 00:09:50.990 { 00:09:50.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.990 "dma_device_type": 2 00:09:50.990 }, 00:09:50.990 { 00:09:50.990 "dma_device_id": "system", 00:09:50.990 "dma_device_type": 1 00:09:50.990 }, 00:09:50.990 { 00:09:50.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.990 "dma_device_type": 2 00:09:50.990 } 00:09:50.990 ], 00:09:50.990 "driver_specific": { 00:09:50.990 "raid": { 00:09:50.990 "uuid": "5e542a5e-6b21-4ac0-9a48-1a672e30d8da", 00:09:50.990 "strip_size_kb": 64, 00:09:50.990 "state": "online", 00:09:50.990 "raid_level": "concat", 00:09:50.990 "superblock": true, 00:09:50.990 "num_base_bdevs": 3, 00:09:50.990 "num_base_bdevs_discovered": 3, 00:09:50.990 "num_base_bdevs_operational": 3, 00:09:50.990 "base_bdevs_list": [ 00:09:50.990 { 00:09:50.990 
"name": "BaseBdev1", 00:09:50.990 "uuid": "6f74681f-b4b7-4c78-b17f-a37f36878790", 00:09:50.990 "is_configured": true, 00:09:50.990 "data_offset": 2048, 00:09:50.990 "data_size": 63488 00:09:50.990 }, 00:09:50.990 { 00:09:50.990 "name": "BaseBdev2", 00:09:50.990 "uuid": "206b8bee-adec-4101-94fa-473d21c55d89", 00:09:50.990 "is_configured": true, 00:09:50.990 "data_offset": 2048, 00:09:50.990 "data_size": 63488 00:09:50.990 }, 00:09:50.990 { 00:09:50.990 "name": "BaseBdev3", 00:09:50.990 "uuid": "e09e10ab-b24b-43eb-be18-a0f28f60043b", 00:09:50.990 "is_configured": true, 00:09:50.990 "data_offset": 2048, 00:09:50.990 "data_size": 63488 00:09:50.990 } 00:09:50.990 ] 00:09:50.990 } 00:09:50.990 } 00:09:50.990 }' 00:09:50.990 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.990 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:50.990 BaseBdev2 00:09:50.990 BaseBdev3' 00:09:50.990 17:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.990 17:44:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.990 17:44:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.250 [2024-11-20 17:44:18.180497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.250 [2024-11-20 17:44:18.180544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.250 [2024-11-20 17:44:18.180609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.250 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.250 "name": "Existed_Raid", 00:09:51.251 "uuid": "5e542a5e-6b21-4ac0-9a48-1a672e30d8da", 00:09:51.251 "strip_size_kb": 64, 00:09:51.251 "state": "offline", 00:09:51.251 "raid_level": "concat", 00:09:51.251 "superblock": true, 00:09:51.251 "num_base_bdevs": 3, 00:09:51.251 "num_base_bdevs_discovered": 2, 00:09:51.251 "num_base_bdevs_operational": 2, 00:09:51.251 "base_bdevs_list": [ 00:09:51.251 { 00:09:51.251 "name": null, 00:09:51.251 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:51.251 "is_configured": false, 00:09:51.251 "data_offset": 0, 00:09:51.251 "data_size": 63488 00:09:51.251 }, 00:09:51.251 { 00:09:51.251 "name": "BaseBdev2", 00:09:51.251 "uuid": "206b8bee-adec-4101-94fa-473d21c55d89", 00:09:51.251 "is_configured": true, 00:09:51.251 "data_offset": 2048, 00:09:51.251 "data_size": 63488 00:09:51.251 }, 00:09:51.251 { 00:09:51.251 "name": "BaseBdev3", 00:09:51.251 "uuid": "e09e10ab-b24b-43eb-be18-a0f28f60043b", 00:09:51.251 "is_configured": true, 00:09:51.251 "data_offset": 2048, 00:09:51.251 "data_size": 63488 00:09:51.251 } 00:09:51.251 ] 00:09:51.251 }' 00:09:51.251 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.251 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.821 [2024-11-20 17:44:18.809002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.821 17:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.821 [2024-11-20 17:44:18.975710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.821 [2024-11-20 17:44:18.975876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.081 BaseBdev2 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.081 
17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.081 [ 00:09:52.081 { 00:09:52.081 "name": "BaseBdev2", 00:09:52.081 "aliases": [ 00:09:52.081 "88f44f35-0083-4024-a546-a2646249346c" 00:09:52.081 ], 00:09:52.081 "product_name": "Malloc disk", 00:09:52.081 "block_size": 512, 00:09:52.081 "num_blocks": 65536, 00:09:52.081 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:52.081 "assigned_rate_limits": { 00:09:52.081 "rw_ios_per_sec": 0, 00:09:52.081 "rw_mbytes_per_sec": 0, 00:09:52.081 "r_mbytes_per_sec": 0, 00:09:52.081 "w_mbytes_per_sec": 0 
00:09:52.081 }, 00:09:52.081 "claimed": false, 00:09:52.081 "zoned": false, 00:09:52.081 "supported_io_types": { 00:09:52.081 "read": true, 00:09:52.081 "write": true, 00:09:52.081 "unmap": true, 00:09:52.081 "flush": true, 00:09:52.081 "reset": true, 00:09:52.081 "nvme_admin": false, 00:09:52.081 "nvme_io": false, 00:09:52.081 "nvme_io_md": false, 00:09:52.081 "write_zeroes": true, 00:09:52.081 "zcopy": true, 00:09:52.081 "get_zone_info": false, 00:09:52.081 "zone_management": false, 00:09:52.081 "zone_append": false, 00:09:52.081 "compare": false, 00:09:52.081 "compare_and_write": false, 00:09:52.081 "abort": true, 00:09:52.081 "seek_hole": false, 00:09:52.081 "seek_data": false, 00:09:52.081 "copy": true, 00:09:52.081 "nvme_iov_md": false 00:09:52.081 }, 00:09:52.081 "memory_domains": [ 00:09:52.081 { 00:09:52.081 "dma_device_id": "system", 00:09:52.081 "dma_device_type": 1 00:09:52.081 }, 00:09:52.081 { 00:09:52.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.081 "dma_device_type": 2 00:09:52.081 } 00:09:52.081 ], 00:09:52.081 "driver_specific": {} 00:09:52.081 } 00:09:52.081 ] 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.081 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.341 BaseBdev3 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.341 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.342 [ 00:09:52.342 { 00:09:52.342 "name": "BaseBdev3", 00:09:52.342 "aliases": [ 00:09:52.342 "514d19ff-46d1-435f-b897-19b776ddbaa1" 00:09:52.342 ], 00:09:52.342 "product_name": "Malloc disk", 00:09:52.342 "block_size": 512, 00:09:52.342 "num_blocks": 65536, 00:09:52.342 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:52.342 "assigned_rate_limits": { 00:09:52.342 "rw_ios_per_sec": 0, 00:09:52.342 "rw_mbytes_per_sec": 0, 
00:09:52.342 "r_mbytes_per_sec": 0, 00:09:52.342 "w_mbytes_per_sec": 0 00:09:52.342 }, 00:09:52.342 "claimed": false, 00:09:52.342 "zoned": false, 00:09:52.342 "supported_io_types": { 00:09:52.342 "read": true, 00:09:52.342 "write": true, 00:09:52.342 "unmap": true, 00:09:52.342 "flush": true, 00:09:52.342 "reset": true, 00:09:52.342 "nvme_admin": false, 00:09:52.342 "nvme_io": false, 00:09:52.342 "nvme_io_md": false, 00:09:52.342 "write_zeroes": true, 00:09:52.342 "zcopy": true, 00:09:52.342 "get_zone_info": false, 00:09:52.342 "zone_management": false, 00:09:52.342 "zone_append": false, 00:09:52.342 "compare": false, 00:09:52.342 "compare_and_write": false, 00:09:52.342 "abort": true, 00:09:52.342 "seek_hole": false, 00:09:52.342 "seek_data": false, 00:09:52.342 "copy": true, 00:09:52.342 "nvme_iov_md": false 00:09:52.342 }, 00:09:52.342 "memory_domains": [ 00:09:52.342 { 00:09:52.342 "dma_device_id": "system", 00:09:52.342 "dma_device_type": 1 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.342 "dma_device_type": 2 00:09:52.342 } 00:09:52.342 ], 00:09:52.342 "driver_specific": {} 00:09:52.342 } 00:09:52.342 ] 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.342 [2024-11-20 17:44:19.311375] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.342 [2024-11-20 17:44:19.311518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.342 [2024-11-20 17:44:19.311571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.342 [2024-11-20 17:44:19.313717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.342 17:44:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.342 "name": "Existed_Raid", 00:09:52.342 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:52.342 "strip_size_kb": 64, 00:09:52.342 "state": "configuring", 00:09:52.342 "raid_level": "concat", 00:09:52.342 "superblock": true, 00:09:52.342 "num_base_bdevs": 3, 00:09:52.342 "num_base_bdevs_discovered": 2, 00:09:52.342 "num_base_bdevs_operational": 3, 00:09:52.342 "base_bdevs_list": [ 00:09:52.342 { 00:09:52.342 "name": "BaseBdev1", 00:09:52.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.342 "is_configured": false, 00:09:52.342 "data_offset": 0, 00:09:52.342 "data_size": 0 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "name": "BaseBdev2", 00:09:52.342 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:52.342 "is_configured": true, 00:09:52.342 "data_offset": 2048, 00:09:52.342 "data_size": 63488 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "name": "BaseBdev3", 00:09:52.342 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:52.342 "is_configured": true, 00:09:52.342 "data_offset": 2048, 00:09:52.342 "data_size": 63488 00:09:52.342 } 00:09:52.342 ] 00:09:52.342 }' 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.342 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.602 [2024-11-20 17:44:19.726696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.602 "name": "Existed_Raid", 00:09:52.602 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:52.602 "strip_size_kb": 64, 00:09:52.602 "state": "configuring", 00:09:52.602 "raid_level": "concat", 00:09:52.602 "superblock": true, 00:09:52.602 "num_base_bdevs": 3, 00:09:52.602 "num_base_bdevs_discovered": 1, 00:09:52.602 "num_base_bdevs_operational": 3, 00:09:52.602 "base_bdevs_list": [ 00:09:52.602 { 00:09:52.602 "name": "BaseBdev1", 00:09:52.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.602 "is_configured": false, 00:09:52.602 "data_offset": 0, 00:09:52.602 "data_size": 0 00:09:52.602 }, 00:09:52.602 { 00:09:52.602 "name": null, 00:09:52.602 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:52.602 "is_configured": false, 00:09:52.602 "data_offset": 0, 00:09:52.602 "data_size": 63488 00:09:52.602 }, 00:09:52.602 { 00:09:52.602 "name": "BaseBdev3", 00:09:52.602 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:52.602 "is_configured": true, 00:09:52.602 "data_offset": 2048, 00:09:52.602 "data_size": 63488 00:09:52.602 } 00:09:52.602 ] 00:09:52.602 }' 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.602 17:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.172 [2024-11-20 17:44:20.229302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.172 BaseBdev1 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.172 17:44:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.172 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.172 [ 00:09:53.172 { 00:09:53.172 "name": "BaseBdev1", 00:09:53.172 "aliases": [ 00:09:53.172 "6e84ff82-0ad4-4781-8738-b56da61f1a9d" 00:09:53.172 ], 00:09:53.172 "product_name": "Malloc disk", 00:09:53.172 "block_size": 512, 00:09:53.172 "num_blocks": 65536, 00:09:53.172 "uuid": "6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:53.172 "assigned_rate_limits": { 00:09:53.172 "rw_ios_per_sec": 0, 00:09:53.172 "rw_mbytes_per_sec": 0, 00:09:53.172 "r_mbytes_per_sec": 0, 00:09:53.172 "w_mbytes_per_sec": 0 00:09:53.172 }, 00:09:53.172 "claimed": true, 00:09:53.172 "claim_type": "exclusive_write", 00:09:53.172 "zoned": false, 00:09:53.172 "supported_io_types": { 00:09:53.172 "read": true, 00:09:53.172 "write": true, 00:09:53.172 "unmap": true, 00:09:53.172 "flush": true, 00:09:53.172 "reset": true, 00:09:53.172 "nvme_admin": false, 00:09:53.172 "nvme_io": false, 00:09:53.172 "nvme_io_md": false, 00:09:53.172 "write_zeroes": true, 00:09:53.172 "zcopy": true, 00:09:53.172 "get_zone_info": false, 00:09:53.172 "zone_management": false, 00:09:53.172 "zone_append": false, 00:09:53.172 "compare": false, 00:09:53.172 "compare_and_write": false, 00:09:53.172 "abort": true, 00:09:53.173 "seek_hole": false, 00:09:53.173 "seek_data": false, 00:09:53.173 "copy": true, 00:09:53.173 "nvme_iov_md": false 00:09:53.173 }, 00:09:53.173 "memory_domains": [ 00:09:53.173 { 00:09:53.173 "dma_device_id": "system", 00:09:53.173 "dma_device_type": 1 00:09:53.173 }, 00:09:53.173 { 00:09:53.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.173 
"dma_device_type": 2 00:09:53.173 } 00:09:53.173 ], 00:09:53.173 "driver_specific": {} 00:09:53.173 } 00:09:53.173 ] 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.173 "name": "Existed_Raid", 00:09:53.173 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:53.173 "strip_size_kb": 64, 00:09:53.173 "state": "configuring", 00:09:53.173 "raid_level": "concat", 00:09:53.173 "superblock": true, 00:09:53.173 "num_base_bdevs": 3, 00:09:53.173 "num_base_bdevs_discovered": 2, 00:09:53.173 "num_base_bdevs_operational": 3, 00:09:53.173 "base_bdevs_list": [ 00:09:53.173 { 00:09:53.173 "name": "BaseBdev1", 00:09:53.173 "uuid": "6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:53.173 "is_configured": true, 00:09:53.173 "data_offset": 2048, 00:09:53.173 "data_size": 63488 00:09:53.173 }, 00:09:53.173 { 00:09:53.173 "name": null, 00:09:53.173 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:53.173 "is_configured": false, 00:09:53.173 "data_offset": 0, 00:09:53.173 "data_size": 63488 00:09:53.173 }, 00:09:53.173 { 00:09:53.173 "name": "BaseBdev3", 00:09:53.173 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:53.173 "is_configured": true, 00:09:53.173 "data_offset": 2048, 00:09:53.173 "data_size": 63488 00:09:53.173 } 00:09:53.173 ] 00:09:53.173 }' 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.173 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.743 [2024-11-20 17:44:20.732509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.743 
17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.743 "name": "Existed_Raid", 00:09:53.743 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:53.743 "strip_size_kb": 64, 00:09:53.743 "state": "configuring", 00:09:53.743 "raid_level": "concat", 00:09:53.743 "superblock": true, 00:09:53.743 "num_base_bdevs": 3, 00:09:53.743 "num_base_bdevs_discovered": 1, 00:09:53.743 "num_base_bdevs_operational": 3, 00:09:53.743 "base_bdevs_list": [ 00:09:53.743 { 00:09:53.743 "name": "BaseBdev1", 00:09:53.743 "uuid": "6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:53.743 "is_configured": true, 00:09:53.743 "data_offset": 2048, 00:09:53.743 "data_size": 63488 00:09:53.743 }, 00:09:53.743 { 00:09:53.743 "name": null, 00:09:53.743 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:53.743 "is_configured": false, 00:09:53.743 "data_offset": 0, 00:09:53.743 "data_size": 63488 00:09:53.743 }, 00:09:53.743 { 00:09:53.743 "name": null, 00:09:53.743 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:53.743 "is_configured": false, 00:09:53.743 "data_offset": 0, 00:09:53.743 "data_size": 63488 00:09:53.743 } 00:09:53.743 ] 00:09:53.743 }' 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.743 17:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.313 
17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.313 [2024-11-20 17:44:21.251705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.313 "name": "Existed_Raid", 00:09:54.313 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:54.313 "strip_size_kb": 64, 00:09:54.313 "state": "configuring", 00:09:54.313 "raid_level": "concat", 00:09:54.313 "superblock": true, 00:09:54.313 "num_base_bdevs": 3, 00:09:54.313 "num_base_bdevs_discovered": 2, 00:09:54.313 "num_base_bdevs_operational": 3, 00:09:54.313 "base_bdevs_list": [ 00:09:54.313 { 00:09:54.313 "name": "BaseBdev1", 00:09:54.313 "uuid": "6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:54.313 "is_configured": true, 00:09:54.313 "data_offset": 2048, 00:09:54.313 "data_size": 63488 00:09:54.313 }, 00:09:54.313 { 00:09:54.313 "name": null, 00:09:54.313 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:54.313 "is_configured": false, 00:09:54.313 "data_offset": 0, 00:09:54.313 "data_size": 
63488 00:09:54.313 }, 00:09:54.313 { 00:09:54.313 "name": "BaseBdev3", 00:09:54.313 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:54.313 "is_configured": true, 00:09:54.313 "data_offset": 2048, 00:09:54.313 "data_size": 63488 00:09:54.313 } 00:09:54.313 ] 00:09:54.313 }' 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.313 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 [2024-11-20 17:44:21.738871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.833 "name": "Existed_Raid", 00:09:54.833 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:54.833 "strip_size_kb": 64, 00:09:54.833 "state": "configuring", 00:09:54.833 "raid_level": "concat", 00:09:54.833 "superblock": true, 00:09:54.833 "num_base_bdevs": 3, 00:09:54.833 "num_base_bdevs_discovered": 1, 00:09:54.833 "num_base_bdevs_operational": 
3, 00:09:54.833 "base_bdevs_list": [ 00:09:54.833 { 00:09:54.833 "name": null, 00:09:54.833 "uuid": "6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:54.833 "is_configured": false, 00:09:54.833 "data_offset": 0, 00:09:54.833 "data_size": 63488 00:09:54.833 }, 00:09:54.833 { 00:09:54.833 "name": null, 00:09:54.833 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:54.833 "is_configured": false, 00:09:54.833 "data_offset": 0, 00:09:54.833 "data_size": 63488 00:09:54.833 }, 00:09:54.833 { 00:09:54.833 "name": "BaseBdev3", 00:09:54.833 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:54.833 "is_configured": true, 00:09:54.833 "data_offset": 2048, 00:09:54.833 "data_size": 63488 00:09:54.833 } 00:09:54.833 ] 00:09:54.833 }' 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.833 17:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:55.402 [2024-11-20 17:44:22.346424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:55.402 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.402 "name": "Existed_Raid", 00:09:55.402 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:55.402 "strip_size_kb": 64, 00:09:55.402 "state": "configuring", 00:09:55.402 "raid_level": "concat", 00:09:55.402 "superblock": true, 00:09:55.402 "num_base_bdevs": 3, 00:09:55.402 "num_base_bdevs_discovered": 2, 00:09:55.402 "num_base_bdevs_operational": 3, 00:09:55.402 "base_bdevs_list": [ 00:09:55.402 { 00:09:55.402 "name": null, 00:09:55.402 "uuid": "6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:55.402 "is_configured": false, 00:09:55.402 "data_offset": 0, 00:09:55.402 "data_size": 63488 00:09:55.402 }, 00:09:55.402 { 00:09:55.402 "name": "BaseBdev2", 00:09:55.402 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:55.403 "is_configured": true, 00:09:55.403 "data_offset": 2048, 00:09:55.403 "data_size": 63488 00:09:55.403 }, 00:09:55.403 { 00:09:55.403 "name": "BaseBdev3", 00:09:55.403 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:55.403 "is_configured": true, 00:09:55.403 "data_offset": 2048, 00:09:55.403 "data_size": 63488 00:09:55.403 } 00:09:55.403 ] 00:09:55.403 }' 00:09:55.403 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.403 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.662 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.662 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:55.662 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.662 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.662 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6e84ff82-0ad4-4781-8738-b56da61f1a9d 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.922 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.922 [2024-11-20 17:44:22.948521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:55.922 [2024-11-20 17:44:22.948783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:55.922 [2024-11-20 17:44:22.948802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:55.922 [2024-11-20 17:44:22.949117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:55.922 [2024-11-20 17:44:22.949300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:55.922 [2024-11-20 17:44:22.949311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:55.922 NewBaseBdev 00:09:55.922 [2024-11-20 17:44:22.949458] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.923 [ 00:09:55.923 { 00:09:55.923 "name": "NewBaseBdev", 00:09:55.923 "aliases": [ 00:09:55.923 "6e84ff82-0ad4-4781-8738-b56da61f1a9d" 00:09:55.923 ], 00:09:55.923 "product_name": "Malloc disk", 00:09:55.923 "block_size": 512, 00:09:55.923 "num_blocks": 65536, 00:09:55.923 "uuid": 
"6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:55.923 "assigned_rate_limits": { 00:09:55.923 "rw_ios_per_sec": 0, 00:09:55.923 "rw_mbytes_per_sec": 0, 00:09:55.923 "r_mbytes_per_sec": 0, 00:09:55.923 "w_mbytes_per_sec": 0 00:09:55.923 }, 00:09:55.923 "claimed": true, 00:09:55.923 "claim_type": "exclusive_write", 00:09:55.923 "zoned": false, 00:09:55.923 "supported_io_types": { 00:09:55.923 "read": true, 00:09:55.923 "write": true, 00:09:55.923 "unmap": true, 00:09:55.923 "flush": true, 00:09:55.923 "reset": true, 00:09:55.923 "nvme_admin": false, 00:09:55.923 "nvme_io": false, 00:09:55.923 "nvme_io_md": false, 00:09:55.923 "write_zeroes": true, 00:09:55.923 "zcopy": true, 00:09:55.923 "get_zone_info": false, 00:09:55.923 "zone_management": false, 00:09:55.923 "zone_append": false, 00:09:55.923 "compare": false, 00:09:55.923 "compare_and_write": false, 00:09:55.923 "abort": true, 00:09:55.923 "seek_hole": false, 00:09:55.923 "seek_data": false, 00:09:55.923 "copy": true, 00:09:55.923 "nvme_iov_md": false 00:09:55.923 }, 00:09:55.923 "memory_domains": [ 00:09:55.923 { 00:09:55.923 "dma_device_id": "system", 00:09:55.923 "dma_device_type": 1 00:09:55.923 }, 00:09:55.923 { 00:09:55.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.923 "dma_device_type": 2 00:09:55.923 } 00:09:55.923 ], 00:09:55.923 "driver_specific": {} 00:09:55.923 } 00:09:55.923 ] 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.923 17:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.923 17:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.923 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.923 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.923 "name": "Existed_Raid", 00:09:55.923 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:55.923 "strip_size_kb": 64, 00:09:55.923 "state": "online", 00:09:55.923 "raid_level": "concat", 00:09:55.923 "superblock": true, 00:09:55.923 "num_base_bdevs": 3, 00:09:55.923 "num_base_bdevs_discovered": 3, 00:09:55.923 "num_base_bdevs_operational": 3, 00:09:55.923 "base_bdevs_list": [ 00:09:55.923 { 00:09:55.923 "name": "NewBaseBdev", 00:09:55.923 "uuid": "6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:55.923 "is_configured": 
true, 00:09:55.923 "data_offset": 2048, 00:09:55.923 "data_size": 63488 00:09:55.923 }, 00:09:55.923 { 00:09:55.923 "name": "BaseBdev2", 00:09:55.923 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:55.923 "is_configured": true, 00:09:55.923 "data_offset": 2048, 00:09:55.923 "data_size": 63488 00:09:55.923 }, 00:09:55.923 { 00:09:55.923 "name": "BaseBdev3", 00:09:55.923 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:55.923 "is_configured": true, 00:09:55.923 "data_offset": 2048, 00:09:55.923 "data_size": 63488 00:09:55.923 } 00:09:55.923 ] 00:09:55.923 }' 00:09:55.923 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.923 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.492 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.493 [2024-11-20 17:44:23.424160] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.493 "name": "Existed_Raid", 00:09:56.493 "aliases": [ 00:09:56.493 "63209b1d-bb70-415f-b387-40e25d0058fe" 00:09:56.493 ], 00:09:56.493 "product_name": "Raid Volume", 00:09:56.493 "block_size": 512, 00:09:56.493 "num_blocks": 190464, 00:09:56.493 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:56.493 "assigned_rate_limits": { 00:09:56.493 "rw_ios_per_sec": 0, 00:09:56.493 "rw_mbytes_per_sec": 0, 00:09:56.493 "r_mbytes_per_sec": 0, 00:09:56.493 "w_mbytes_per_sec": 0 00:09:56.493 }, 00:09:56.493 "claimed": false, 00:09:56.493 "zoned": false, 00:09:56.493 "supported_io_types": { 00:09:56.493 "read": true, 00:09:56.493 "write": true, 00:09:56.493 "unmap": true, 00:09:56.493 "flush": true, 00:09:56.493 "reset": true, 00:09:56.493 "nvme_admin": false, 00:09:56.493 "nvme_io": false, 00:09:56.493 "nvme_io_md": false, 00:09:56.493 "write_zeroes": true, 00:09:56.493 "zcopy": false, 00:09:56.493 "get_zone_info": false, 00:09:56.493 "zone_management": false, 00:09:56.493 "zone_append": false, 00:09:56.493 "compare": false, 00:09:56.493 "compare_and_write": false, 00:09:56.493 "abort": false, 00:09:56.493 "seek_hole": false, 00:09:56.493 "seek_data": false, 00:09:56.493 "copy": false, 00:09:56.493 "nvme_iov_md": false 00:09:56.493 }, 00:09:56.493 "memory_domains": [ 00:09:56.493 { 00:09:56.493 "dma_device_id": "system", 00:09:56.493 "dma_device_type": 1 00:09:56.493 }, 00:09:56.493 { 00:09:56.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.493 "dma_device_type": 2 00:09:56.493 }, 00:09:56.493 { 00:09:56.493 "dma_device_id": "system", 00:09:56.493 "dma_device_type": 1 00:09:56.493 }, 00:09:56.493 { 00:09:56.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.493 
"dma_device_type": 2 00:09:56.493 }, 00:09:56.493 { 00:09:56.493 "dma_device_id": "system", 00:09:56.493 "dma_device_type": 1 00:09:56.493 }, 00:09:56.493 { 00:09:56.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.493 "dma_device_type": 2 00:09:56.493 } 00:09:56.493 ], 00:09:56.493 "driver_specific": { 00:09:56.493 "raid": { 00:09:56.493 "uuid": "63209b1d-bb70-415f-b387-40e25d0058fe", 00:09:56.493 "strip_size_kb": 64, 00:09:56.493 "state": "online", 00:09:56.493 "raid_level": "concat", 00:09:56.493 "superblock": true, 00:09:56.493 "num_base_bdevs": 3, 00:09:56.493 "num_base_bdevs_discovered": 3, 00:09:56.493 "num_base_bdevs_operational": 3, 00:09:56.493 "base_bdevs_list": [ 00:09:56.493 { 00:09:56.493 "name": "NewBaseBdev", 00:09:56.493 "uuid": "6e84ff82-0ad4-4781-8738-b56da61f1a9d", 00:09:56.493 "is_configured": true, 00:09:56.493 "data_offset": 2048, 00:09:56.493 "data_size": 63488 00:09:56.493 }, 00:09:56.493 { 00:09:56.493 "name": "BaseBdev2", 00:09:56.493 "uuid": "88f44f35-0083-4024-a546-a2646249346c", 00:09:56.493 "is_configured": true, 00:09:56.493 "data_offset": 2048, 00:09:56.493 "data_size": 63488 00:09:56.493 }, 00:09:56.493 { 00:09:56.493 "name": "BaseBdev3", 00:09:56.493 "uuid": "514d19ff-46d1-435f-b897-19b776ddbaa1", 00:09:56.493 "is_configured": true, 00:09:56.493 "data_offset": 2048, 00:09:56.493 "data_size": 63488 00:09:56.493 } 00:09:56.493 ] 00:09:56.493 } 00:09:56.493 } 00:09:56.493 }' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:56.493 BaseBdev2 00:09:56.493 BaseBdev3' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.493 
17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.493 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.753 [2024-11-20 17:44:23.683344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.753 [2024-11-20 17:44:23.683469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.753 [2024-11-20 17:44:23.683581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.753 [2024-11-20 17:44:23.683664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.753 [2024-11-20 17:44:23.683743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:56.753 17:44:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66625 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66625 ']' 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66625 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66625 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66625' 00:09:56.753 killing process with pid 66625 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66625 00:09:56.753 [2024-11-20 17:44:23.731100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.753 17:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66625 00:09:57.012 [2024-11-20 17:44:24.062500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.411 17:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:58.411 00:09:58.411 real 0m10.933s 00:09:58.411 user 0m17.173s 00:09:58.411 sys 0m1.947s 00:09:58.411 17:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.411 17:44:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.411 ************************************ 00:09:58.411 END TEST raid_state_function_test_sb 00:09:58.411 ************************************ 00:09:58.412 17:44:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:58.412 17:44:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:58.412 17:44:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.412 17:44:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.412 ************************************ 00:09:58.412 START TEST raid_superblock_test 00:09:58.412 ************************************ 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67245 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67245 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67245 ']' 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.412 17:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.412 [2024-11-20 17:44:25.486904] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:09:58.412 [2024-11-20 17:44:25.487167] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67245 ] 00:09:58.671 [2024-11-20 17:44:25.663911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.671 [2024-11-20 17:44:25.804230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.931 [2024-11-20 17:44:26.042791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.931 [2024-11-20 17:44:26.042950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:59.192 
17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.192 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 malloc1 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 [2024-11-20 17:44:26.409407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:59.452 [2024-11-20 17:44:26.409486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.452 [2024-11-20 17:44:26.409511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:59.452 [2024-11-20 17:44:26.409521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.452 [2024-11-20 17:44:26.411898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.452 [2024-11-20 17:44:26.412020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:59.452 pt1 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 malloc2 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 [2024-11-20 17:44:26.475871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.452 [2024-11-20 17:44:26.476030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.452 [2024-11-20 17:44:26.476078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:59.452 [2024-11-20 17:44:26.476107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.452 [2024-11-20 17:44:26.478477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.452 [2024-11-20 17:44:26.478549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.452 
pt2 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 malloc3 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 [2024-11-20 17:44:26.552734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:59.452 [2024-11-20 17:44:26.552880] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.452 [2024-11-20 17:44:26.552923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:59.452 [2024-11-20 17:44:26.552954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.452 [2024-11-20 17:44:26.555379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.452 [2024-11-20 17:44:26.555452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:59.452 pt3 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 [2024-11-20 17:44:26.564767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:59.452 [2024-11-20 17:44:26.566907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.452 [2024-11-20 17:44:26.567023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:59.452 [2024-11-20 17:44:26.567216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:59.452 [2024-11-20 17:44:26.567265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:59.452 [2024-11-20 17:44:26.567533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:59.452 [2024-11-20 17:44:26.567744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:59.452 [2024-11-20 17:44:26.567782] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:59.452 [2024-11-20 17:44:26.567965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.452 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.453 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.453 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.453 17:44:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.453 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.453 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.453 "name": "raid_bdev1", 00:09:59.453 "uuid": "e9d36db6-b424-41b7-9360-4e7f16e7d639", 00:09:59.453 "strip_size_kb": 64, 00:09:59.453 "state": "online", 00:09:59.453 "raid_level": "concat", 00:09:59.453 "superblock": true, 00:09:59.453 "num_base_bdevs": 3, 00:09:59.453 "num_base_bdevs_discovered": 3, 00:09:59.453 "num_base_bdevs_operational": 3, 00:09:59.453 "base_bdevs_list": [ 00:09:59.453 { 00:09:59.453 "name": "pt1", 00:09:59.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.453 "is_configured": true, 00:09:59.453 "data_offset": 2048, 00:09:59.453 "data_size": 63488 00:09:59.453 }, 00:09:59.453 { 00:09:59.453 "name": "pt2", 00:09:59.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.453 "is_configured": true, 00:09:59.453 "data_offset": 2048, 00:09:59.453 "data_size": 63488 00:09:59.453 }, 00:09:59.453 { 00:09:59.453 "name": "pt3", 00:09:59.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.453 "is_configured": true, 00:09:59.453 "data_offset": 2048, 00:09:59.453 "data_size": 63488 00:09:59.453 } 00:09:59.453 ] 00:09:59.453 }' 00:09:59.453 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.453 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.023 [2024-11-20 17:44:26.960458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.023 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.023 "name": "raid_bdev1", 00:10:00.023 "aliases": [ 00:10:00.023 "e9d36db6-b424-41b7-9360-4e7f16e7d639" 00:10:00.023 ], 00:10:00.023 "product_name": "Raid Volume", 00:10:00.023 "block_size": 512, 00:10:00.023 "num_blocks": 190464, 00:10:00.023 "uuid": "e9d36db6-b424-41b7-9360-4e7f16e7d639", 00:10:00.023 "assigned_rate_limits": { 00:10:00.023 "rw_ios_per_sec": 0, 00:10:00.023 "rw_mbytes_per_sec": 0, 00:10:00.023 "r_mbytes_per_sec": 0, 00:10:00.023 "w_mbytes_per_sec": 0 00:10:00.023 }, 00:10:00.023 "claimed": false, 00:10:00.023 "zoned": false, 00:10:00.023 "supported_io_types": { 00:10:00.023 "read": true, 00:10:00.023 "write": true, 00:10:00.023 "unmap": true, 00:10:00.023 "flush": true, 00:10:00.023 "reset": true, 00:10:00.023 "nvme_admin": false, 00:10:00.023 "nvme_io": false, 00:10:00.023 "nvme_io_md": false, 00:10:00.023 "write_zeroes": true, 00:10:00.023 "zcopy": false, 00:10:00.023 "get_zone_info": false, 00:10:00.023 "zone_management": false, 00:10:00.023 "zone_append": false, 00:10:00.023 "compare": 
false, 00:10:00.023 "compare_and_write": false, 00:10:00.023 "abort": false, 00:10:00.023 "seek_hole": false, 00:10:00.023 "seek_data": false, 00:10:00.023 "copy": false, 00:10:00.023 "nvme_iov_md": false 00:10:00.023 }, 00:10:00.023 "memory_domains": [ 00:10:00.023 { 00:10:00.023 "dma_device_id": "system", 00:10:00.023 "dma_device_type": 1 00:10:00.023 }, 00:10:00.023 { 00:10:00.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.023 "dma_device_type": 2 00:10:00.023 }, 00:10:00.023 { 00:10:00.023 "dma_device_id": "system", 00:10:00.023 "dma_device_type": 1 00:10:00.023 }, 00:10:00.023 { 00:10:00.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.023 "dma_device_type": 2 00:10:00.023 }, 00:10:00.023 { 00:10:00.023 "dma_device_id": "system", 00:10:00.023 "dma_device_type": 1 00:10:00.023 }, 00:10:00.023 { 00:10:00.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.023 "dma_device_type": 2 00:10:00.023 } 00:10:00.023 ], 00:10:00.023 "driver_specific": { 00:10:00.023 "raid": { 00:10:00.023 "uuid": "e9d36db6-b424-41b7-9360-4e7f16e7d639", 00:10:00.023 "strip_size_kb": 64, 00:10:00.023 "state": "online", 00:10:00.023 "raid_level": "concat", 00:10:00.023 "superblock": true, 00:10:00.023 "num_base_bdevs": 3, 00:10:00.023 "num_base_bdevs_discovered": 3, 00:10:00.023 "num_base_bdevs_operational": 3, 00:10:00.023 "base_bdevs_list": [ 00:10:00.023 { 00:10:00.023 "name": "pt1", 00:10:00.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.023 "is_configured": true, 00:10:00.023 "data_offset": 2048, 00:10:00.024 "data_size": 63488 00:10:00.024 }, 00:10:00.024 { 00:10:00.024 "name": "pt2", 00:10:00.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.024 "is_configured": true, 00:10:00.024 "data_offset": 2048, 00:10:00.024 "data_size": 63488 00:10:00.024 }, 00:10:00.024 { 00:10:00.024 "name": "pt3", 00:10:00.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.024 "is_configured": true, 00:10:00.024 "data_offset": 2048, 00:10:00.024 
"data_size": 63488 00:10:00.024 } 00:10:00.024 ] 00:10:00.024 } 00:10:00.024 } 00:10:00.024 }' 00:10:00.024 17:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:00.024 pt2 00:10:00.024 pt3' 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.024 17:44:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.024 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:00.284 [2024-11-20 17:44:27.219843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.284 17:44:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e9d36db6-b424-41b7-9360-4e7f16e7d639 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e9d36db6-b424-41b7-9360-4e7f16e7d639 ']' 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.284 [2024-11-20 17:44:27.263500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.284 [2024-11-20 17:44:27.263532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.284 [2024-11-20 17:44:27.263619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.284 [2024-11-20 17:44:27.263689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.284 [2024-11-20 17:44:27.263699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.284 17:44:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:00.284 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.285 [2024-11-20 17:44:27.403372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:00.285 [2024-11-20 17:44:27.405699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:10:00.285 [2024-11-20 17:44:27.405756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:00.285 [2024-11-20 17:44:27.405813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:00.285 [2024-11-20 17:44:27.405882] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:00.285 [2024-11-20 17:44:27.405901] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:00.285 [2024-11-20 17:44:27.405918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.285 [2024-11-20 17:44:27.405928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:00.285 request: 00:10:00.285 { 00:10:00.285 "name": "raid_bdev1", 00:10:00.285 "raid_level": "concat", 00:10:00.285 "base_bdevs": [ 00:10:00.285 "malloc1", 00:10:00.285 "malloc2", 00:10:00.285 "malloc3" 00:10:00.285 ], 00:10:00.285 "strip_size_kb": 64, 00:10:00.285 "superblock": false, 00:10:00.285 "method": "bdev_raid_create", 00:10:00.285 "req_id": 1 00:10:00.285 } 00:10:00.285 Got JSON-RPC error response 00:10:00.285 response: 00:10:00.285 { 00:10:00.285 "code": -17, 00:10:00.285 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:00.285 } 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.285 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 [2024-11-20 17:44:27.471176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.545 [2024-11-20 17:44:27.471279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.545 [2024-11-20 17:44:27.471315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:00.545 [2024-11-20 17:44:27.471343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.545 [2024-11-20 17:44:27.473925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.545 [2024-11-20 17:44:27.473998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.545 [2024-11-20 17:44:27.474118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:00.545 [2024-11-20 17:44:27.474192] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:00.545 pt1 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.545 "name": "raid_bdev1", 
00:10:00.545 "uuid": "e9d36db6-b424-41b7-9360-4e7f16e7d639", 00:10:00.545 "strip_size_kb": 64, 00:10:00.545 "state": "configuring", 00:10:00.545 "raid_level": "concat", 00:10:00.545 "superblock": true, 00:10:00.545 "num_base_bdevs": 3, 00:10:00.545 "num_base_bdevs_discovered": 1, 00:10:00.545 "num_base_bdevs_operational": 3, 00:10:00.545 "base_bdevs_list": [ 00:10:00.545 { 00:10:00.545 "name": "pt1", 00:10:00.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.545 "is_configured": true, 00:10:00.545 "data_offset": 2048, 00:10:00.545 "data_size": 63488 00:10:00.545 }, 00:10:00.545 { 00:10:00.545 "name": null, 00:10:00.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.545 "is_configured": false, 00:10:00.545 "data_offset": 2048, 00:10:00.545 "data_size": 63488 00:10:00.545 }, 00:10:00.545 { 00:10:00.545 "name": null, 00:10:00.545 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.545 "is_configured": false, 00:10:00.545 "data_offset": 2048, 00:10:00.545 "data_size": 63488 00:10:00.545 } 00:10:00.545 ] 00:10:00.545 }' 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.545 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.805 [2024-11-20 17:44:27.938461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.805 [2024-11-20 17:44:27.938569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.805 [2024-11-20 17:44:27.938604] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:00.805 [2024-11-20 17:44:27.938614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.805 [2024-11-20 17:44:27.939153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.805 [2024-11-20 17:44:27.939172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.805 [2024-11-20 17:44:27.939274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:00.805 [2024-11-20 17:44:27.939307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.805 pt2 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.805 [2024-11-20 17:44:27.950409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.805 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:00.806 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.806 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.806 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.806 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.806 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.806 17:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.806 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.806 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.065 17:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.065 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.065 "name": "raid_bdev1", 00:10:01.065 "uuid": "e9d36db6-b424-41b7-9360-4e7f16e7d639", 00:10:01.065 "strip_size_kb": 64, 00:10:01.065 "state": "configuring", 00:10:01.065 "raid_level": "concat", 00:10:01.065 "superblock": true, 00:10:01.065 "num_base_bdevs": 3, 00:10:01.065 "num_base_bdevs_discovered": 1, 00:10:01.065 "num_base_bdevs_operational": 3, 00:10:01.065 "base_bdevs_list": [ 00:10:01.065 { 00:10:01.065 "name": "pt1", 00:10:01.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.065 "is_configured": true, 00:10:01.065 "data_offset": 2048, 00:10:01.065 "data_size": 63488 00:10:01.065 }, 00:10:01.065 { 00:10:01.065 "name": null, 00:10:01.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.065 "is_configured": false, 00:10:01.065 "data_offset": 0, 00:10:01.065 "data_size": 63488 00:10:01.065 }, 00:10:01.065 { 00:10:01.065 "name": null, 00:10:01.065 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.065 "is_configured": false, 00:10:01.065 "data_offset": 2048, 00:10:01.065 "data_size": 63488 00:10:01.065 } 00:10:01.065 ] 00:10:01.065 }' 00:10:01.065 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.065 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.367 [2024-11-20 17:44:28.369696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:01.367 [2024-11-20 17:44:28.369918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.367 [2024-11-20 17:44:28.369961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:01.367 [2024-11-20 17:44:28.369998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.367 [2024-11-20 17:44:28.370628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.367 [2024-11-20 17:44:28.370702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.367 [2024-11-20 17:44:28.370847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:01.367 [2024-11-20 17:44:28.370907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.367 pt2 00:10:01.367 17:44:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.367 [2024-11-20 17:44:28.381647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:01.367 [2024-11-20 17:44:28.381764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.367 [2024-11-20 17:44:28.381802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:01.367 [2024-11-20 17:44:28.381850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.367 [2024-11-20 17:44:28.382430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.367 [2024-11-20 17:44:28.382507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:01.367 [2024-11-20 17:44:28.382633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:01.367 [2024-11-20 17:44:28.382694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:01.367 [2024-11-20 17:44:28.382885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:01.367 [2024-11-20 17:44:28.382902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:01.367 [2024-11-20 17:44:28.383274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:01.367 [2024-11-20 17:44:28.383488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:01.367 [2024-11-20 17:44:28.383499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:01.367 [2024-11-20 17:44:28.383688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.367 pt3 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.367 17:44:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.367 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.367 "name": "raid_bdev1", 00:10:01.367 "uuid": "e9d36db6-b424-41b7-9360-4e7f16e7d639", 00:10:01.367 "strip_size_kb": 64, 00:10:01.367 "state": "online", 00:10:01.367 "raid_level": "concat", 00:10:01.367 "superblock": true, 00:10:01.367 "num_base_bdevs": 3, 00:10:01.367 "num_base_bdevs_discovered": 3, 00:10:01.367 "num_base_bdevs_operational": 3, 00:10:01.367 "base_bdevs_list": [ 00:10:01.367 { 00:10:01.367 "name": "pt1", 00:10:01.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.367 "is_configured": true, 00:10:01.367 "data_offset": 2048, 00:10:01.367 "data_size": 63488 00:10:01.368 }, 00:10:01.368 { 00:10:01.368 "name": "pt2", 00:10:01.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.368 "is_configured": true, 00:10:01.368 "data_offset": 2048, 00:10:01.368 "data_size": 63488 00:10:01.368 }, 00:10:01.368 { 00:10:01.368 "name": "pt3", 00:10:01.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.368 "is_configured": true, 00:10:01.368 "data_offset": 2048, 00:10:01.368 "data_size": 63488 00:10:01.368 } 00:10:01.368 ] 00:10:01.368 }' 00:10:01.368 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.368 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.937 [2024-11-20 17:44:28.829325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.937 "name": "raid_bdev1", 00:10:01.937 "aliases": [ 00:10:01.937 "e9d36db6-b424-41b7-9360-4e7f16e7d639" 00:10:01.937 ], 00:10:01.937 "product_name": "Raid Volume", 00:10:01.937 "block_size": 512, 00:10:01.937 "num_blocks": 190464, 00:10:01.937 "uuid": "e9d36db6-b424-41b7-9360-4e7f16e7d639", 00:10:01.937 "assigned_rate_limits": { 00:10:01.937 "rw_ios_per_sec": 0, 00:10:01.937 "rw_mbytes_per_sec": 0, 00:10:01.937 "r_mbytes_per_sec": 0, 00:10:01.937 "w_mbytes_per_sec": 0 00:10:01.937 }, 00:10:01.937 "claimed": false, 00:10:01.937 "zoned": false, 00:10:01.937 "supported_io_types": { 00:10:01.937 "read": true, 00:10:01.937 "write": true, 00:10:01.937 "unmap": true, 00:10:01.937 "flush": true, 00:10:01.937 "reset": true, 00:10:01.937 "nvme_admin": false, 00:10:01.937 "nvme_io": false, 
00:10:01.937 "nvme_io_md": false, 00:10:01.937 "write_zeroes": true, 00:10:01.937 "zcopy": false, 00:10:01.937 "get_zone_info": false, 00:10:01.937 "zone_management": false, 00:10:01.937 "zone_append": false, 00:10:01.937 "compare": false, 00:10:01.937 "compare_and_write": false, 00:10:01.937 "abort": false, 00:10:01.937 "seek_hole": false, 00:10:01.937 "seek_data": false, 00:10:01.937 "copy": false, 00:10:01.937 "nvme_iov_md": false 00:10:01.937 }, 00:10:01.937 "memory_domains": [ 00:10:01.937 { 00:10:01.937 "dma_device_id": "system", 00:10:01.937 "dma_device_type": 1 00:10:01.937 }, 00:10:01.937 { 00:10:01.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.937 "dma_device_type": 2 00:10:01.937 }, 00:10:01.937 { 00:10:01.937 "dma_device_id": "system", 00:10:01.937 "dma_device_type": 1 00:10:01.937 }, 00:10:01.937 { 00:10:01.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.937 "dma_device_type": 2 00:10:01.937 }, 00:10:01.937 { 00:10:01.937 "dma_device_id": "system", 00:10:01.937 "dma_device_type": 1 00:10:01.937 }, 00:10:01.937 { 00:10:01.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.937 "dma_device_type": 2 00:10:01.937 } 00:10:01.937 ], 00:10:01.937 "driver_specific": { 00:10:01.937 "raid": { 00:10:01.937 "uuid": "e9d36db6-b424-41b7-9360-4e7f16e7d639", 00:10:01.937 "strip_size_kb": 64, 00:10:01.937 "state": "online", 00:10:01.937 "raid_level": "concat", 00:10:01.937 "superblock": true, 00:10:01.937 "num_base_bdevs": 3, 00:10:01.937 "num_base_bdevs_discovered": 3, 00:10:01.937 "num_base_bdevs_operational": 3, 00:10:01.937 "base_bdevs_list": [ 00:10:01.937 { 00:10:01.937 "name": "pt1", 00:10:01.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.937 "is_configured": true, 00:10:01.937 "data_offset": 2048, 00:10:01.937 "data_size": 63488 00:10:01.937 }, 00:10:01.937 { 00:10:01.937 "name": "pt2", 00:10:01.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.937 "is_configured": true, 00:10:01.937 "data_offset": 2048, 00:10:01.937 
"data_size": 63488 00:10:01.937 }, 00:10:01.937 { 00:10:01.937 "name": "pt3", 00:10:01.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.937 "is_configured": true, 00:10:01.937 "data_offset": 2048, 00:10:01.937 "data_size": 63488 00:10:01.937 } 00:10:01.937 ] 00:10:01.937 } 00:10:01.937 } 00:10:01.937 }' 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:01.937 pt2 00:10:01.937 pt3' 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.937 17:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.937 17:44:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.938 [2024-11-20 17:44:29.092914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e9d36db6-b424-41b7-9360-4e7f16e7d639 '!=' e9d36db6-b424-41b7-9360-4e7f16e7d639 ']' 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67245 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67245 ']' 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67245 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67245 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67245' 00:10:02.198 killing process with pid 67245 00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67245 00:10:02.198 [2024-11-20 17:44:29.153826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:02.198 17:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67245 00:10:02.198 [2024-11-20 17:44:29.154101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.198 [2024-11-20 17:44:29.154231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.198 [2024-11-20 17:44:29.154282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:02.457 [2024-11-20 17:44:29.534695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.836 17:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:03.836 00:10:03.836 real 0m5.416s 00:10:03.836 user 0m7.488s 00:10:03.836 sys 0m0.997s 00:10:03.836 17:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.836 ************************************ 00:10:03.836 END TEST raid_superblock_test 00:10:03.836 ************************************ 00:10:03.836 17:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.836 17:44:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:03.836 17:44:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.836 17:44:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.836 17:44:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.836 ************************************ 00:10:03.836 START TEST raid_read_error_test 00:10:03.836 ************************************ 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:03.836 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:03.837 17:44:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kf0T1BrevG 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67504 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67504 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67504 ']' 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.837 17:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.837 [2024-11-20 17:44:30.975291] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:10:03.837 [2024-11-20 17:44:30.975942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67504 ] 00:10:04.096 [2024-11-20 17:44:31.150239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.356 [2024-11-20 17:44:31.293743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.616 [2024-11-20 17:44:31.542935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.616 [2024-11-20 17:44:31.543000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.876 BaseBdev1_malloc 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.876 true 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.876 [2024-11-20 17:44:31.889684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.876 [2024-11-20 17:44:31.889846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.876 [2024-11-20 17:44:31.889889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:04.876 [2024-11-20 17:44:31.889922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.876 [2024-11-20 17:44:31.892316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.876 [2024-11-20 17:44:31.892412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.876 BaseBdev1 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.876 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.877 BaseBdev2_malloc 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.877 true 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.877 [2024-11-20 17:44:31.964983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:04.877 [2024-11-20 17:44:31.965163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.877 [2024-11-20 17:44:31.965190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:04.877 [2024-11-20 17:44:31.965204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.877 [2024-11-20 17:44:31.967766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.877 [2024-11-20 17:44:31.967808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.877 BaseBdev2 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.877 17:44:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.877 BaseBdev3_malloc 00:10:04.877 17:44:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.877 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:04.877 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.877 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.877 true 00:10:04.877 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.877 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:04.877 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.877 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.137 [2024-11-20 17:44:32.051865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:05.137 [2024-11-20 17:44:32.051935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.137 [2024-11-20 17:44:32.051955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:05.137 [2024-11-20 17:44:32.051968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.137 [2024-11-20 17:44:32.054384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.137 [2024-11-20 17:44:32.054519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:05.137 BaseBdev3 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.137 [2024-11-20 17:44:32.063926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.137 [2024-11-20 17:44:32.065998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.137 [2024-11-20 17:44:32.066088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.137 [2024-11-20 17:44:32.066298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:05.137 [2024-11-20 17:44:32.066310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:05.137 [2024-11-20 17:44:32.066588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:05.137 [2024-11-20 17:44:32.066755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:05.137 [2024-11-20 17:44:32.066770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:05.137 [2024-11-20 17:44:32.066912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.137 17:44:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.137 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.138 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.138 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.138 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.138 "name": "raid_bdev1", 00:10:05.138 "uuid": "81fa2925-8729-46d6-adf5-6d0909675e45", 00:10:05.138 "strip_size_kb": 64, 00:10:05.138 "state": "online", 00:10:05.138 "raid_level": "concat", 00:10:05.138 "superblock": true, 00:10:05.138 "num_base_bdevs": 3, 00:10:05.138 "num_base_bdevs_discovered": 3, 00:10:05.138 "num_base_bdevs_operational": 3, 00:10:05.138 "base_bdevs_list": [ 00:10:05.138 { 00:10:05.138 "name": "BaseBdev1", 00:10:05.138 "uuid": "e4332136-5991-5eff-b10e-37f923f9cd4f", 00:10:05.138 "is_configured": true, 00:10:05.138 "data_offset": 2048, 00:10:05.138 "data_size": 63488 00:10:05.138 }, 00:10:05.138 { 00:10:05.138 "name": "BaseBdev2", 00:10:05.138 "uuid": "7e73bfbf-1b6c-5178-9c6b-e5a67435b8b0", 00:10:05.138 "is_configured": true, 00:10:05.138 "data_offset": 2048, 00:10:05.138 "data_size": 63488 
00:10:05.138 }, 00:10:05.138 { 00:10:05.138 "name": "BaseBdev3", 00:10:05.138 "uuid": "7db851b2-64d9-5582-8703-18aee0c912bd", 00:10:05.138 "is_configured": true, 00:10:05.138 "data_offset": 2048, 00:10:05.138 "data_size": 63488 00:10:05.138 } 00:10:05.138 ] 00:10:05.138 }' 00:10:05.138 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.138 17:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.399 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:05.399 17:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:05.399 [2024-11-20 17:44:32.552562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.338 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.598 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.598 "name": "raid_bdev1", 00:10:06.598 "uuid": "81fa2925-8729-46d6-adf5-6d0909675e45", 00:10:06.598 "strip_size_kb": 64, 00:10:06.598 "state": "online", 00:10:06.598 "raid_level": "concat", 00:10:06.598 "superblock": true, 00:10:06.598 "num_base_bdevs": 3, 00:10:06.598 "num_base_bdevs_discovered": 3, 00:10:06.598 "num_base_bdevs_operational": 3, 00:10:06.598 "base_bdevs_list": [ 00:10:06.598 { 00:10:06.598 "name": "BaseBdev1", 00:10:06.598 "uuid": "e4332136-5991-5eff-b10e-37f923f9cd4f", 00:10:06.598 "is_configured": true, 00:10:06.598 "data_offset": 2048, 00:10:06.598 "data_size": 63488 
00:10:06.598 }, 00:10:06.598 { 00:10:06.598 "name": "BaseBdev2", 00:10:06.598 "uuid": "7e73bfbf-1b6c-5178-9c6b-e5a67435b8b0", 00:10:06.598 "is_configured": true, 00:10:06.598 "data_offset": 2048, 00:10:06.598 "data_size": 63488 00:10:06.598 }, 00:10:06.598 { 00:10:06.598 "name": "BaseBdev3", 00:10:06.598 "uuid": "7db851b2-64d9-5582-8703-18aee0c912bd", 00:10:06.598 "is_configured": true, 00:10:06.598 "data_offset": 2048, 00:10:06.598 "data_size": 63488 00:10:06.598 } 00:10:06.598 ] 00:10:06.598 }' 00:10:06.598 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.598 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.858 [2024-11-20 17:44:33.957520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.858 [2024-11-20 17:44:33.957689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.858 [2024-11-20 17:44:33.960472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.858 [2024-11-20 17:44:33.960530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.858 [2024-11-20 17:44:33.960573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.858 [2024-11-20 17:44:33.960583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:06.858 { 00:10:06.858 "results": [ 00:10:06.858 { 00:10:06.858 "job": "raid_bdev1", 00:10:06.858 "core_mask": "0x1", 00:10:06.858 "workload": "randrw", 00:10:06.858 "percentage": 50, 
00:10:06.858 "status": "finished", 00:10:06.858 "queue_depth": 1, 00:10:06.858 "io_size": 131072, 00:10:06.858 "runtime": 1.405618, 00:10:06.858 "iops": 13271.03096289319, 00:10:06.858 "mibps": 1658.8788703616488, 00:10:06.858 "io_failed": 1, 00:10:06.858 "io_timeout": 0, 00:10:06.858 "avg_latency_us": 105.82488790366094, 00:10:06.858 "min_latency_us": 26.270742358078603, 00:10:06.858 "max_latency_us": 1509.6174672489083 00:10:06.858 } 00:10:06.858 ], 00:10:06.858 "core_count": 1 00:10:06.858 } 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67504 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67504 ']' 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67504 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.858 17:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67504 00:10:06.858 17:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.858 17:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.858 17:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67504' 00:10:06.858 killing process with pid 67504 00:10:06.858 17:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67504 00:10:06.858 [2024-11-20 17:44:34.009118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.858 17:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67504 00:10:07.119 [2024-11-20 
17:44:34.265069] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kf0T1BrevG 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:08.496 ************************************ 00:10:08.496 END TEST raid_read_error_test 00:10:08.496 ************************************ 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:08.496 00:10:08.496 real 0m4.723s 00:10:08.496 user 0m5.439s 00:10:08.496 sys 0m0.680s 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.496 17:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.496 17:44:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:08.496 17:44:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:08.496 17:44:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.496 17:44:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.496 ************************************ 00:10:08.496 START TEST raid_write_error_test 00:10:08.496 ************************************ 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:08.496 17:44:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.496 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:08.761 17:44:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AM8eiXyJ6X 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67644 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67644 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67644 ']' 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.761 17:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.761 [2024-11-20 17:44:35.770581] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:10:08.761 [2024-11-20 17:44:35.770797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67644 ] 00:10:09.030 [2024-11-20 17:44:35.950714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.030 [2024-11-20 17:44:36.095616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.289 [2024-11-20 17:44:36.334165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.289 [2024-11-20 17:44:36.334351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.549 BaseBdev1_malloc 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.549 true 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.549 [2024-11-20 17:44:36.683656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:09.549 [2024-11-20 17:44:36.683741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.549 [2024-11-20 17:44:36.683764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:09.549 [2024-11-20 17:44:36.683777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.549 [2024-11-20 17:44:36.686289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.549 [2024-11-20 17:44:36.686413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:09.549 BaseBdev1 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.549 17:44:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.809 BaseBdev2_malloc 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.809 true 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.809 [2024-11-20 17:44:36.757780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:09.809 [2024-11-20 17:44:36.757854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.809 [2024-11-20 17:44:36.757872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:09.809 [2024-11-20 17:44:36.757884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.809 [2024-11-20 17:44:36.760533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.809 [2024-11-20 17:44:36.760643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:09.809 BaseBdev2 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.809 17:44:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.809 BaseBdev3_malloc 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.809 true 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.809 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.809 [2024-11-20 17:44:36.842631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:09.809 [2024-11-20 17:44:36.842698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.809 [2024-11-20 17:44:36.842716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:09.809 [2024-11-20 17:44:36.842729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.809 [2024-11-20 17:44:36.845338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.809 [2024-11-20 17:44:36.845468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:09.809 BaseBdev3 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.810 [2024-11-20 17:44:36.854704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.810 [2024-11-20 17:44:36.856852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.810 [2024-11-20 17:44:36.857001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.810 [2024-11-20 17:44:36.857245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.810 [2024-11-20 17:44:36.857259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:09.810 [2024-11-20 17:44:36.857529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:09.810 [2024-11-20 17:44:36.857702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.810 [2024-11-20 17:44:36.857716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:09.810 [2024-11-20 17:44:36.857890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.810 "name": "raid_bdev1", 00:10:09.810 "uuid": "6dc95084-637d-4b05-a614-d1e7deda579f", 00:10:09.810 "strip_size_kb": 64, 00:10:09.810 "state": "online", 00:10:09.810 "raid_level": "concat", 00:10:09.810 "superblock": true, 00:10:09.810 "num_base_bdevs": 3, 00:10:09.810 "num_base_bdevs_discovered": 3, 00:10:09.810 "num_base_bdevs_operational": 3, 00:10:09.810 "base_bdevs_list": [ 00:10:09.810 { 00:10:09.810 
"name": "BaseBdev1", 00:10:09.810 "uuid": "7ff8d0a4-6185-5538-a440-c256fa300e2b", 00:10:09.810 "is_configured": true, 00:10:09.810 "data_offset": 2048, 00:10:09.810 "data_size": 63488 00:10:09.810 }, 00:10:09.810 { 00:10:09.810 "name": "BaseBdev2", 00:10:09.810 "uuid": "78835217-38d5-5646-872b-533a9759465f", 00:10:09.810 "is_configured": true, 00:10:09.810 "data_offset": 2048, 00:10:09.810 "data_size": 63488 00:10:09.810 }, 00:10:09.810 { 00:10:09.810 "name": "BaseBdev3", 00:10:09.810 "uuid": "a3946c38-c437-5f01-8206-e605a913d101", 00:10:09.810 "is_configured": true, 00:10:09.810 "data_offset": 2048, 00:10:09.810 "data_size": 63488 00:10:09.810 } 00:10:09.810 ] 00:10:09.810 }' 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.810 17:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.379 17:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:10.379 17:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:10.379 [2024-11-20 17:44:37.379377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.317 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.317 "name": "raid_bdev1", 00:10:11.317 "uuid": "6dc95084-637d-4b05-a614-d1e7deda579f", 00:10:11.317 "strip_size_kb": 64, 00:10:11.317 "state": "online", 
00:10:11.317 "raid_level": "concat", 00:10:11.317 "superblock": true, 00:10:11.317 "num_base_bdevs": 3, 00:10:11.317 "num_base_bdevs_discovered": 3, 00:10:11.317 "num_base_bdevs_operational": 3, 00:10:11.317 "base_bdevs_list": [ 00:10:11.317 { 00:10:11.317 "name": "BaseBdev1", 00:10:11.317 "uuid": "7ff8d0a4-6185-5538-a440-c256fa300e2b", 00:10:11.317 "is_configured": true, 00:10:11.317 "data_offset": 2048, 00:10:11.318 "data_size": 63488 00:10:11.318 }, 00:10:11.318 { 00:10:11.318 "name": "BaseBdev2", 00:10:11.318 "uuid": "78835217-38d5-5646-872b-533a9759465f", 00:10:11.318 "is_configured": true, 00:10:11.318 "data_offset": 2048, 00:10:11.318 "data_size": 63488 00:10:11.318 }, 00:10:11.318 { 00:10:11.318 "name": "BaseBdev3", 00:10:11.318 "uuid": "a3946c38-c437-5f01-8206-e605a913d101", 00:10:11.318 "is_configured": true, 00:10:11.318 "data_offset": 2048, 00:10:11.318 "data_size": 63488 00:10:11.318 } 00:10:11.318 ] 00:10:11.318 }' 00:10:11.318 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.318 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.888 [2024-11-20 17:44:38.801612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.888 [2024-11-20 17:44:38.801769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.888 [2024-11-20 17:44:38.804419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.888 [2024-11-20 17:44:38.804464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.888 [2024-11-20 17:44:38.804506] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.888 [2024-11-20 17:44:38.804519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:11.888 { 00:10:11.888 "results": [ 00:10:11.888 { 00:10:11.888 "job": "raid_bdev1", 00:10:11.888 "core_mask": "0x1", 00:10:11.888 "workload": "randrw", 00:10:11.888 "percentage": 50, 00:10:11.888 "status": "finished", 00:10:11.888 "queue_depth": 1, 00:10:11.888 "io_size": 131072, 00:10:11.888 "runtime": 1.422733, 00:10:11.888 "iops": 13061.480966562243, 00:10:11.888 "mibps": 1632.6851208202804, 00:10:11.888 "io_failed": 1, 00:10:11.888 "io_timeout": 0, 00:10:11.888 "avg_latency_us": 107.45323074551617, 00:10:11.888 "min_latency_us": 25.9353711790393, 00:10:11.888 "max_latency_us": 1717.1004366812226 00:10:11.888 } 00:10:11.888 ], 00:10:11.888 "core_count": 1 00:10:11.888 } 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67644 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67644 ']' 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67644 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67644 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67644' 00:10:11.888 killing process with pid 67644 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67644 00:10:11.888 [2024-11-20 17:44:38.854786] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.888 17:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67644 00:10:12.148 [2024-11-20 17:44:39.115723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AM8eiXyJ6X 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:13.528 00:10:13.528 real 0m4.793s 00:10:13.528 user 0m5.529s 00:10:13.528 sys 0m0.698s 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.528 17:44:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.528 ************************************ 00:10:13.528 END TEST raid_write_error_test 00:10:13.528 ************************************ 00:10:13.528 17:44:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:13.528 17:44:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:13.528 17:44:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:13.528 17:44:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.528 17:44:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.528 ************************************ 00:10:13.528 START TEST raid_state_function_test 00:10:13.528 ************************************ 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67793 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67793' 00:10:13.528 Process raid pid: 67793 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67793 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67793 ']' 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.528 17:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.528 [2024-11-20 17:44:40.624612] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:10:13.528 [2024-11-20 17:44:40.625235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.788 [2024-11-20 17:44:40.797946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.788 [2024-11-20 17:44:40.943913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.047 [2024-11-20 17:44:41.196872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.047 [2024-11-20 17:44:41.197082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.307 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.307 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:14.307 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.307 17:44:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.307 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.307 [2024-11-20 17:44:41.476034] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.307 [2024-11-20 17:44:41.476214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.307 [2024-11-20 17:44:41.476249] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.307 [2024-11-20 17:44:41.476275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.307 [2024-11-20 17:44:41.476303] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.307 [2024-11-20 17:44:41.476327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.307 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.307 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.567 
17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.567 "name": "Existed_Raid", 00:10:14.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.567 "strip_size_kb": 0, 00:10:14.567 "state": "configuring", 00:10:14.567 "raid_level": "raid1", 00:10:14.567 "superblock": false, 00:10:14.567 "num_base_bdevs": 3, 00:10:14.567 "num_base_bdevs_discovered": 0, 00:10:14.567 "num_base_bdevs_operational": 3, 00:10:14.567 "base_bdevs_list": [ 00:10:14.567 { 00:10:14.567 "name": "BaseBdev1", 00:10:14.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.567 "is_configured": false, 00:10:14.567 "data_offset": 0, 00:10:14.567 "data_size": 0 00:10:14.567 }, 00:10:14.567 { 00:10:14.567 "name": "BaseBdev2", 00:10:14.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.567 "is_configured": false, 00:10:14.567 "data_offset": 0, 00:10:14.567 "data_size": 0 00:10:14.567 }, 00:10:14.567 { 00:10:14.567 "name": "BaseBdev3", 00:10:14.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.567 "is_configured": false, 00:10:14.567 "data_offset": 0, 00:10:14.567 "data_size": 0 00:10:14.567 } 00:10:14.567 ] 00:10:14.567 }' 00:10:14.567 17:44:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.567 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.827 [2024-11-20 17:44:41.915257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.827 [2024-11-20 17:44:41.915317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.827 [2024-11-20 17:44:41.927200] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.827 [2024-11-20 17:44:41.927263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.827 [2024-11-20 17:44:41.927273] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.827 [2024-11-20 17:44:41.927283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.827 [2024-11-20 17:44:41.927289] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.827 [2024-11-20 17:44:41.927300] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.827 [2024-11-20 17:44:41.981783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.827 BaseBdev1 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.827 17:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.087 [ 00:10:15.087 { 00:10:15.087 "name": "BaseBdev1", 00:10:15.087 "aliases": [ 00:10:15.087 "0effaebc-f737-4345-ace1-bf355f854eeb" 00:10:15.087 ], 00:10:15.087 "product_name": "Malloc disk", 00:10:15.087 "block_size": 512, 00:10:15.087 "num_blocks": 65536, 00:10:15.087 "uuid": "0effaebc-f737-4345-ace1-bf355f854eeb", 00:10:15.087 "assigned_rate_limits": { 00:10:15.087 "rw_ios_per_sec": 0, 00:10:15.087 "rw_mbytes_per_sec": 0, 00:10:15.087 "r_mbytes_per_sec": 0, 00:10:15.087 "w_mbytes_per_sec": 0 00:10:15.087 }, 00:10:15.087 "claimed": true, 00:10:15.087 "claim_type": "exclusive_write", 00:10:15.087 "zoned": false, 00:10:15.087 "supported_io_types": { 00:10:15.087 "read": true, 00:10:15.087 "write": true, 00:10:15.087 "unmap": true, 00:10:15.087 "flush": true, 00:10:15.087 "reset": true, 00:10:15.087 "nvme_admin": false, 00:10:15.087 "nvme_io": false, 00:10:15.087 "nvme_io_md": false, 00:10:15.087 "write_zeroes": true, 00:10:15.087 "zcopy": true, 00:10:15.087 "get_zone_info": false, 00:10:15.087 "zone_management": false, 00:10:15.087 "zone_append": false, 00:10:15.087 "compare": false, 00:10:15.087 "compare_and_write": false, 00:10:15.087 "abort": true, 00:10:15.087 "seek_hole": false, 00:10:15.087 "seek_data": false, 00:10:15.087 "copy": true, 00:10:15.087 "nvme_iov_md": false 00:10:15.087 }, 00:10:15.087 "memory_domains": [ 00:10:15.087 { 00:10:15.087 "dma_device_id": "system", 00:10:15.087 "dma_device_type": 1 00:10:15.087 }, 00:10:15.087 { 00:10:15.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.087 "dma_device_type": 2 00:10:15.087 } 00:10:15.087 ], 00:10:15.087 "driver_specific": {} 00:10:15.087 } 00:10:15.087 ] 00:10:15.087 17:44:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:15.087 "name": "Existed_Raid", 00:10:15.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.087 "strip_size_kb": 0, 00:10:15.087 "state": "configuring", 00:10:15.087 "raid_level": "raid1", 00:10:15.087 "superblock": false, 00:10:15.087 "num_base_bdevs": 3, 00:10:15.087 "num_base_bdevs_discovered": 1, 00:10:15.087 "num_base_bdevs_operational": 3, 00:10:15.087 "base_bdevs_list": [ 00:10:15.087 { 00:10:15.087 "name": "BaseBdev1", 00:10:15.087 "uuid": "0effaebc-f737-4345-ace1-bf355f854eeb", 00:10:15.087 "is_configured": true, 00:10:15.087 "data_offset": 0, 00:10:15.087 "data_size": 65536 00:10:15.087 }, 00:10:15.087 { 00:10:15.087 "name": "BaseBdev2", 00:10:15.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.087 "is_configured": false, 00:10:15.087 "data_offset": 0, 00:10:15.087 "data_size": 0 00:10:15.087 }, 00:10:15.087 { 00:10:15.087 "name": "BaseBdev3", 00:10:15.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.087 "is_configured": false, 00:10:15.087 "data_offset": 0, 00:10:15.087 "data_size": 0 00:10:15.087 } 00:10:15.087 ] 00:10:15.087 }' 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.087 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.347 [2024-11-20 17:44:42.485050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.347 [2024-11-20 17:44:42.485226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.347 [2024-11-20 17:44:42.493041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.347 [2024-11-20 17:44:42.495366] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.347 [2024-11-20 17:44:42.495451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.347 [2024-11-20 17:44:42.495486] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.347 [2024-11-20 17:44:42.495510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:15.347 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.348 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.607 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.607 "name": "Existed_Raid", 00:10:15.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.607 "strip_size_kb": 0, 00:10:15.607 "state": "configuring", 00:10:15.607 "raid_level": "raid1", 00:10:15.607 "superblock": false, 00:10:15.607 "num_base_bdevs": 3, 00:10:15.607 "num_base_bdevs_discovered": 1, 00:10:15.607 "num_base_bdevs_operational": 3, 00:10:15.607 "base_bdevs_list": [ 00:10:15.607 { 00:10:15.607 "name": "BaseBdev1", 00:10:15.607 "uuid": "0effaebc-f737-4345-ace1-bf355f854eeb", 00:10:15.607 "is_configured": true, 00:10:15.607 "data_offset": 0, 00:10:15.607 "data_size": 65536 00:10:15.607 }, 00:10:15.607 { 00:10:15.607 "name": "BaseBdev2", 00:10:15.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.607 
"is_configured": false, 00:10:15.607 "data_offset": 0, 00:10:15.607 "data_size": 0 00:10:15.607 }, 00:10:15.607 { 00:10:15.607 "name": "BaseBdev3", 00:10:15.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.607 "is_configured": false, 00:10:15.607 "data_offset": 0, 00:10:15.607 "data_size": 0 00:10:15.607 } 00:10:15.607 ] 00:10:15.607 }' 00:10:15.607 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.607 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.868 [2024-11-20 17:44:42.949286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.868 BaseBdev2 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.868 17:44:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.868 [ 00:10:15.868 { 00:10:15.868 "name": "BaseBdev2", 00:10:15.868 "aliases": [ 00:10:15.868 "7f170875-2822-48e3-8821-5be327934541" 00:10:15.868 ], 00:10:15.868 "product_name": "Malloc disk", 00:10:15.868 "block_size": 512, 00:10:15.868 "num_blocks": 65536, 00:10:15.868 "uuid": "7f170875-2822-48e3-8821-5be327934541", 00:10:15.868 "assigned_rate_limits": { 00:10:15.868 "rw_ios_per_sec": 0, 00:10:15.868 "rw_mbytes_per_sec": 0, 00:10:15.868 "r_mbytes_per_sec": 0, 00:10:15.868 "w_mbytes_per_sec": 0 00:10:15.868 }, 00:10:15.868 "claimed": true, 00:10:15.868 "claim_type": "exclusive_write", 00:10:15.868 "zoned": false, 00:10:15.868 "supported_io_types": { 00:10:15.868 "read": true, 00:10:15.868 "write": true, 00:10:15.868 "unmap": true, 00:10:15.868 "flush": true, 00:10:15.868 "reset": true, 00:10:15.868 "nvme_admin": false, 00:10:15.868 "nvme_io": false, 00:10:15.868 "nvme_io_md": false, 00:10:15.868 "write_zeroes": true, 00:10:15.868 "zcopy": true, 00:10:15.868 "get_zone_info": false, 00:10:15.868 "zone_management": false, 00:10:15.868 "zone_append": false, 00:10:15.868 "compare": false, 00:10:15.868 "compare_and_write": false, 00:10:15.868 "abort": true, 00:10:15.868 "seek_hole": false, 00:10:15.868 "seek_data": false, 00:10:15.868 "copy": true, 00:10:15.868 "nvme_iov_md": false 00:10:15.868 }, 00:10:15.868 
"memory_domains": [ 00:10:15.868 { 00:10:15.868 "dma_device_id": "system", 00:10:15.868 "dma_device_type": 1 00:10:15.868 }, 00:10:15.868 { 00:10:15.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.868 "dma_device_type": 2 00:10:15.868 } 00:10:15.868 ], 00:10:15.868 "driver_specific": {} 00:10:15.868 } 00:10:15.868 ] 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.868 17:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.868 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.868 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.868 "name": "Existed_Raid", 00:10:15.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.868 "strip_size_kb": 0, 00:10:15.868 "state": "configuring", 00:10:15.868 "raid_level": "raid1", 00:10:15.868 "superblock": false, 00:10:15.868 "num_base_bdevs": 3, 00:10:15.868 "num_base_bdevs_discovered": 2, 00:10:15.868 "num_base_bdevs_operational": 3, 00:10:15.868 "base_bdevs_list": [ 00:10:15.868 { 00:10:15.868 "name": "BaseBdev1", 00:10:15.868 "uuid": "0effaebc-f737-4345-ace1-bf355f854eeb", 00:10:15.868 "is_configured": true, 00:10:15.868 "data_offset": 0, 00:10:15.868 "data_size": 65536 00:10:15.868 }, 00:10:15.868 { 00:10:15.868 "name": "BaseBdev2", 00:10:15.868 "uuid": "7f170875-2822-48e3-8821-5be327934541", 00:10:15.868 "is_configured": true, 00:10:15.868 "data_offset": 0, 00:10:15.868 "data_size": 65536 00:10:15.868 }, 00:10:15.868 { 00:10:15.868 "name": "BaseBdev3", 00:10:15.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.868 "is_configured": false, 00:10:15.868 "data_offset": 0, 00:10:15.868 "data_size": 0 00:10:15.868 } 00:10:15.868 ] 00:10:15.868 }' 00:10:15.868 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.868 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.438 [2024-11-20 17:44:43.477784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.438 [2024-11-20 17:44:43.477951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.438 [2024-11-20 17:44:43.477987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:16.438 [2024-11-20 17:44:43.478421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.438 [2024-11-20 17:44:43.478628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.438 [2024-11-20 17:44:43.478639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:16.438 [2024-11-20 17:44:43.478958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.438 BaseBdev3 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.438 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.438 [ 00:10:16.438 { 00:10:16.438 "name": "BaseBdev3", 00:10:16.438 "aliases": [ 00:10:16.438 "1489e2ad-0227-49b7-b758-d8df825fcd95" 00:10:16.438 ], 00:10:16.438 "product_name": "Malloc disk", 00:10:16.438 "block_size": 512, 00:10:16.438 "num_blocks": 65536, 00:10:16.438 "uuid": "1489e2ad-0227-49b7-b758-d8df825fcd95", 00:10:16.438 "assigned_rate_limits": { 00:10:16.438 "rw_ios_per_sec": 0, 00:10:16.438 "rw_mbytes_per_sec": 0, 00:10:16.438 "r_mbytes_per_sec": 0, 00:10:16.438 "w_mbytes_per_sec": 0 00:10:16.438 }, 00:10:16.438 "claimed": true, 00:10:16.438 "claim_type": "exclusive_write", 00:10:16.438 "zoned": false, 00:10:16.438 "supported_io_types": { 00:10:16.438 "read": true, 00:10:16.438 "write": true, 00:10:16.438 "unmap": true, 00:10:16.438 "flush": true, 00:10:16.438 "reset": true, 00:10:16.438 "nvme_admin": false, 00:10:16.438 "nvme_io": false, 00:10:16.438 "nvme_io_md": false, 00:10:16.438 "write_zeroes": true, 00:10:16.438 "zcopy": true, 00:10:16.438 "get_zone_info": false, 00:10:16.438 "zone_management": false, 00:10:16.438 "zone_append": false, 00:10:16.439 "compare": false, 00:10:16.439 "compare_and_write": false, 00:10:16.439 "abort": true, 00:10:16.439 "seek_hole": false, 00:10:16.439 "seek_data": false, 00:10:16.439 
"copy": true, 00:10:16.439 "nvme_iov_md": false 00:10:16.439 }, 00:10:16.439 "memory_domains": [ 00:10:16.439 { 00:10:16.439 "dma_device_id": "system", 00:10:16.439 "dma_device_type": 1 00:10:16.439 }, 00:10:16.439 { 00:10:16.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.439 "dma_device_type": 2 00:10:16.439 } 00:10:16.439 ], 00:10:16.439 "driver_specific": {} 00:10:16.439 } 00:10:16.439 ] 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.439 17:44:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.439 "name": "Existed_Raid", 00:10:16.439 "uuid": "17d0c52e-00e5-4ccb-ab05-5cd885742246", 00:10:16.439 "strip_size_kb": 0, 00:10:16.439 "state": "online", 00:10:16.439 "raid_level": "raid1", 00:10:16.439 "superblock": false, 00:10:16.439 "num_base_bdevs": 3, 00:10:16.439 "num_base_bdevs_discovered": 3, 00:10:16.439 "num_base_bdevs_operational": 3, 00:10:16.439 "base_bdevs_list": [ 00:10:16.439 { 00:10:16.439 "name": "BaseBdev1", 00:10:16.439 "uuid": "0effaebc-f737-4345-ace1-bf355f854eeb", 00:10:16.439 "is_configured": true, 00:10:16.439 "data_offset": 0, 00:10:16.439 "data_size": 65536 00:10:16.439 }, 00:10:16.439 { 00:10:16.439 "name": "BaseBdev2", 00:10:16.439 "uuid": "7f170875-2822-48e3-8821-5be327934541", 00:10:16.439 "is_configured": true, 00:10:16.439 "data_offset": 0, 00:10:16.439 "data_size": 65536 00:10:16.439 }, 00:10:16.439 { 00:10:16.439 "name": "BaseBdev3", 00:10:16.439 "uuid": "1489e2ad-0227-49b7-b758-d8df825fcd95", 00:10:16.439 "is_configured": true, 00:10:16.439 "data_offset": 0, 00:10:16.439 "data_size": 65536 00:10:16.439 } 00:10:16.439 ] 00:10:16.439 }' 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.439 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.008 17:44:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.008 [2024-11-20 17:44:43.957527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.008 "name": "Existed_Raid", 00:10:17.008 "aliases": [ 00:10:17.008 "17d0c52e-00e5-4ccb-ab05-5cd885742246" 00:10:17.008 ], 00:10:17.008 "product_name": "Raid Volume", 00:10:17.008 "block_size": 512, 00:10:17.008 "num_blocks": 65536, 00:10:17.008 "uuid": "17d0c52e-00e5-4ccb-ab05-5cd885742246", 00:10:17.008 "assigned_rate_limits": { 00:10:17.008 "rw_ios_per_sec": 0, 00:10:17.008 "rw_mbytes_per_sec": 0, 00:10:17.008 "r_mbytes_per_sec": 0, 00:10:17.008 "w_mbytes_per_sec": 0 00:10:17.008 }, 00:10:17.008 "claimed": false, 00:10:17.008 "zoned": false, 
00:10:17.008 "supported_io_types": { 00:10:17.008 "read": true, 00:10:17.008 "write": true, 00:10:17.008 "unmap": false, 00:10:17.008 "flush": false, 00:10:17.008 "reset": true, 00:10:17.008 "nvme_admin": false, 00:10:17.008 "nvme_io": false, 00:10:17.008 "nvme_io_md": false, 00:10:17.008 "write_zeroes": true, 00:10:17.008 "zcopy": false, 00:10:17.008 "get_zone_info": false, 00:10:17.008 "zone_management": false, 00:10:17.008 "zone_append": false, 00:10:17.008 "compare": false, 00:10:17.008 "compare_and_write": false, 00:10:17.008 "abort": false, 00:10:17.008 "seek_hole": false, 00:10:17.008 "seek_data": false, 00:10:17.008 "copy": false, 00:10:17.008 "nvme_iov_md": false 00:10:17.008 }, 00:10:17.008 "memory_domains": [ 00:10:17.008 { 00:10:17.008 "dma_device_id": "system", 00:10:17.008 "dma_device_type": 1 00:10:17.008 }, 00:10:17.008 { 00:10:17.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.008 "dma_device_type": 2 00:10:17.008 }, 00:10:17.008 { 00:10:17.008 "dma_device_id": "system", 00:10:17.008 "dma_device_type": 1 00:10:17.008 }, 00:10:17.008 { 00:10:17.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.008 "dma_device_type": 2 00:10:17.008 }, 00:10:17.008 { 00:10:17.008 "dma_device_id": "system", 00:10:17.008 "dma_device_type": 1 00:10:17.008 }, 00:10:17.008 { 00:10:17.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.008 "dma_device_type": 2 00:10:17.008 } 00:10:17.008 ], 00:10:17.008 "driver_specific": { 00:10:17.008 "raid": { 00:10:17.008 "uuid": "17d0c52e-00e5-4ccb-ab05-5cd885742246", 00:10:17.008 "strip_size_kb": 0, 00:10:17.008 "state": "online", 00:10:17.008 "raid_level": "raid1", 00:10:17.008 "superblock": false, 00:10:17.008 "num_base_bdevs": 3, 00:10:17.008 "num_base_bdevs_discovered": 3, 00:10:17.008 "num_base_bdevs_operational": 3, 00:10:17.008 "base_bdevs_list": [ 00:10:17.008 { 00:10:17.008 "name": "BaseBdev1", 00:10:17.008 "uuid": "0effaebc-f737-4345-ace1-bf355f854eeb", 00:10:17.008 "is_configured": true, 00:10:17.008 
"data_offset": 0, 00:10:17.008 "data_size": 65536 00:10:17.008 }, 00:10:17.008 { 00:10:17.008 "name": "BaseBdev2", 00:10:17.008 "uuid": "7f170875-2822-48e3-8821-5be327934541", 00:10:17.008 "is_configured": true, 00:10:17.008 "data_offset": 0, 00:10:17.008 "data_size": 65536 00:10:17.008 }, 00:10:17.008 { 00:10:17.008 "name": "BaseBdev3", 00:10:17.008 "uuid": "1489e2ad-0227-49b7-b758-d8df825fcd95", 00:10:17.008 "is_configured": true, 00:10:17.008 "data_offset": 0, 00:10:17.008 "data_size": 65536 00:10:17.008 } 00:10:17.008 ] 00:10:17.008 } 00:10:17.008 } 00:10:17.008 }' 00:10:17.008 17:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.009 BaseBdev2 00:10:17.009 BaseBdev3' 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.009 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.268 [2024-11-20 17:44:44.252964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.268 "name": "Existed_Raid", 00:10:17.268 "uuid": "17d0c52e-00e5-4ccb-ab05-5cd885742246", 00:10:17.268 "strip_size_kb": 0, 00:10:17.268 "state": "online", 00:10:17.268 "raid_level": "raid1", 00:10:17.268 "superblock": false, 00:10:17.268 "num_base_bdevs": 3, 00:10:17.268 "num_base_bdevs_discovered": 2, 00:10:17.268 "num_base_bdevs_operational": 2, 00:10:17.268 "base_bdevs_list": [ 00:10:17.268 { 00:10:17.268 "name": null, 00:10:17.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.268 "is_configured": false, 00:10:17.268 "data_offset": 0, 00:10:17.268 "data_size": 65536 00:10:17.268 }, 00:10:17.268 { 00:10:17.268 "name": "BaseBdev2", 00:10:17.268 "uuid": "7f170875-2822-48e3-8821-5be327934541", 00:10:17.268 "is_configured": true, 00:10:17.268 "data_offset": 0, 00:10:17.268 "data_size": 65536 00:10:17.268 }, 00:10:17.268 { 00:10:17.268 "name": "BaseBdev3", 00:10:17.268 "uuid": "1489e2ad-0227-49b7-b758-d8df825fcd95", 00:10:17.268 "is_configured": true, 00:10:17.268 "data_offset": 0, 00:10:17.268 "data_size": 65536 00:10:17.268 } 00:10:17.268 ] 
00:10:17.268 }' 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.268 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.838 [2024-11-20 17:44:44.820895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.838 17:44:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.838 17:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.838 [2024-11-20 17:44:44.996538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.838 [2024-11-20 17:44:44.996677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.098 [2024-11-20 17:44:45.103267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.098 [2024-11-20 17:44:45.103337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.098 [2024-11-20 17:44:45.103352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.098 17:44:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.098 BaseBdev2 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.098 
17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.098 [ 00:10:18.098 { 00:10:18.098 "name": "BaseBdev2", 00:10:18.098 "aliases": [ 00:10:18.098 "dd827239-eeb9-4944-9944-7926efcbcfcf" 00:10:18.098 ], 00:10:18.098 "product_name": "Malloc disk", 00:10:18.098 "block_size": 512, 00:10:18.098 "num_blocks": 65536, 00:10:18.098 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:18.098 "assigned_rate_limits": { 00:10:18.098 "rw_ios_per_sec": 0, 00:10:18.098 "rw_mbytes_per_sec": 0, 00:10:18.098 "r_mbytes_per_sec": 0, 00:10:18.098 "w_mbytes_per_sec": 0 00:10:18.098 }, 00:10:18.098 "claimed": false, 00:10:18.098 "zoned": false, 00:10:18.098 "supported_io_types": { 00:10:18.098 "read": true, 00:10:18.098 "write": true, 00:10:18.098 "unmap": true, 00:10:18.098 "flush": true, 00:10:18.098 "reset": true, 00:10:18.098 "nvme_admin": false, 00:10:18.098 "nvme_io": false, 00:10:18.098 "nvme_io_md": false, 00:10:18.098 "write_zeroes": true, 
00:10:18.098 "zcopy": true, 00:10:18.098 "get_zone_info": false, 00:10:18.098 "zone_management": false, 00:10:18.098 "zone_append": false, 00:10:18.098 "compare": false, 00:10:18.098 "compare_and_write": false, 00:10:18.098 "abort": true, 00:10:18.098 "seek_hole": false, 00:10:18.098 "seek_data": false, 00:10:18.098 "copy": true, 00:10:18.098 "nvme_iov_md": false 00:10:18.098 }, 00:10:18.098 "memory_domains": [ 00:10:18.098 { 00:10:18.098 "dma_device_id": "system", 00:10:18.098 "dma_device_type": 1 00:10:18.098 }, 00:10:18.098 { 00:10:18.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.098 "dma_device_type": 2 00:10:18.098 } 00:10:18.098 ], 00:10:18.098 "driver_specific": {} 00:10:18.098 } 00:10:18.098 ] 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.098 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.099 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.099 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.099 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.359 BaseBdev3 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.359 17:44:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.359 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.359 [ 00:10:18.359 { 00:10:18.359 "name": "BaseBdev3", 00:10:18.359 "aliases": [ 00:10:18.359 "9325fdf4-0320-4921-a3cd-52d444548982" 00:10:18.359 ], 00:10:18.359 "product_name": "Malloc disk", 00:10:18.359 "block_size": 512, 00:10:18.359 "num_blocks": 65536, 00:10:18.359 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:18.359 "assigned_rate_limits": { 00:10:18.359 "rw_ios_per_sec": 0, 00:10:18.359 "rw_mbytes_per_sec": 0, 00:10:18.359 "r_mbytes_per_sec": 0, 00:10:18.359 "w_mbytes_per_sec": 0 00:10:18.359 }, 00:10:18.359 "claimed": false, 00:10:18.359 "zoned": false, 00:10:18.359 "supported_io_types": { 00:10:18.359 "read": true, 00:10:18.359 "write": true, 00:10:18.359 "unmap": true, 00:10:18.359 "flush": true, 00:10:18.359 "reset": true, 00:10:18.359 "nvme_admin": false, 00:10:18.359 "nvme_io": false, 00:10:18.359 "nvme_io_md": false, 00:10:18.359 "write_zeroes": true, 
00:10:18.359 "zcopy": true, 00:10:18.359 "get_zone_info": false, 00:10:18.359 "zone_management": false, 00:10:18.359 "zone_append": false, 00:10:18.359 "compare": false, 00:10:18.359 "compare_and_write": false, 00:10:18.359 "abort": true, 00:10:18.359 "seek_hole": false, 00:10:18.359 "seek_data": false, 00:10:18.359 "copy": true, 00:10:18.359 "nvme_iov_md": false 00:10:18.359 }, 00:10:18.359 "memory_domains": [ 00:10:18.359 { 00:10:18.359 "dma_device_id": "system", 00:10:18.359 "dma_device_type": 1 00:10:18.359 }, 00:10:18.359 { 00:10:18.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.359 "dma_device_type": 2 00:10:18.359 } 00:10:18.360 ], 00:10:18.360 "driver_specific": {} 00:10:18.360 } 00:10:18.360 ] 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 [2024-11-20 17:44:45.341270] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.360 [2024-11-20 17:44:45.341340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.360 [2024-11-20 17:44:45.341367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.360 [2024-11-20 17:44:45.343570] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:18.360 "name": "Existed_Raid", 00:10:18.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.360 "strip_size_kb": 0, 00:10:18.360 "state": "configuring", 00:10:18.360 "raid_level": "raid1", 00:10:18.360 "superblock": false, 00:10:18.360 "num_base_bdevs": 3, 00:10:18.360 "num_base_bdevs_discovered": 2, 00:10:18.360 "num_base_bdevs_operational": 3, 00:10:18.360 "base_bdevs_list": [ 00:10:18.360 { 00:10:18.360 "name": "BaseBdev1", 00:10:18.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.360 "is_configured": false, 00:10:18.360 "data_offset": 0, 00:10:18.360 "data_size": 0 00:10:18.360 }, 00:10:18.360 { 00:10:18.360 "name": "BaseBdev2", 00:10:18.360 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:18.360 "is_configured": true, 00:10:18.360 "data_offset": 0, 00:10:18.360 "data_size": 65536 00:10:18.360 }, 00:10:18.360 { 00:10:18.360 "name": "BaseBdev3", 00:10:18.360 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:18.360 "is_configured": true, 00:10:18.360 "data_offset": 0, 00:10:18.360 "data_size": 65536 00:10:18.360 } 00:10:18.360 ] 00:10:18.360 }' 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.360 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.624 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:18.624 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.624 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.624 [2024-11-20 17:44:45.764624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.624 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.624 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:18.624 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.625 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.885 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.885 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.885 "name": "Existed_Raid", 00:10:18.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.885 "strip_size_kb": 0, 00:10:18.885 "state": "configuring", 00:10:18.885 "raid_level": "raid1", 00:10:18.885 "superblock": false, 00:10:18.885 "num_base_bdevs": 3, 
00:10:18.885 "num_base_bdevs_discovered": 1, 00:10:18.885 "num_base_bdevs_operational": 3, 00:10:18.885 "base_bdevs_list": [ 00:10:18.885 { 00:10:18.885 "name": "BaseBdev1", 00:10:18.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.885 "is_configured": false, 00:10:18.885 "data_offset": 0, 00:10:18.885 "data_size": 0 00:10:18.885 }, 00:10:18.885 { 00:10:18.885 "name": null, 00:10:18.885 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:18.885 "is_configured": false, 00:10:18.885 "data_offset": 0, 00:10:18.885 "data_size": 65536 00:10:18.885 }, 00:10:18.885 { 00:10:18.885 "name": "BaseBdev3", 00:10:18.885 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:18.885 "is_configured": true, 00:10:18.885 "data_offset": 0, 00:10:18.885 "data_size": 65536 00:10:18.885 } 00:10:18.885 ] 00:10:18.885 }' 00:10:18.885 17:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.885 17:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.146 17:44:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.146 [2024-11-20 17:44:46.278886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.146 BaseBdev1 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.146 [ 00:10:19.146 { 00:10:19.146 "name": "BaseBdev1", 00:10:19.146 "aliases": [ 00:10:19.146 "86cb78cf-f77a-42cb-930e-654f93859c3d" 00:10:19.146 ], 00:10:19.146 "product_name": "Malloc disk", 
00:10:19.146 "block_size": 512, 00:10:19.146 "num_blocks": 65536, 00:10:19.146 "uuid": "86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:19.146 "assigned_rate_limits": { 00:10:19.146 "rw_ios_per_sec": 0, 00:10:19.146 "rw_mbytes_per_sec": 0, 00:10:19.146 "r_mbytes_per_sec": 0, 00:10:19.146 "w_mbytes_per_sec": 0 00:10:19.146 }, 00:10:19.146 "claimed": true, 00:10:19.146 "claim_type": "exclusive_write", 00:10:19.146 "zoned": false, 00:10:19.146 "supported_io_types": { 00:10:19.146 "read": true, 00:10:19.146 "write": true, 00:10:19.146 "unmap": true, 00:10:19.146 "flush": true, 00:10:19.146 "reset": true, 00:10:19.146 "nvme_admin": false, 00:10:19.146 "nvme_io": false, 00:10:19.146 "nvme_io_md": false, 00:10:19.146 "write_zeroes": true, 00:10:19.146 "zcopy": true, 00:10:19.146 "get_zone_info": false, 00:10:19.146 "zone_management": false, 00:10:19.146 "zone_append": false, 00:10:19.146 "compare": false, 00:10:19.146 "compare_and_write": false, 00:10:19.146 "abort": true, 00:10:19.146 "seek_hole": false, 00:10:19.146 "seek_data": false, 00:10:19.146 "copy": true, 00:10:19.146 "nvme_iov_md": false 00:10:19.146 }, 00:10:19.146 "memory_domains": [ 00:10:19.146 { 00:10:19.146 "dma_device_id": "system", 00:10:19.146 "dma_device_type": 1 00:10:19.146 }, 00:10:19.146 { 00:10:19.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.146 "dma_device_type": 2 00:10:19.146 } 00:10:19.146 ], 00:10:19.146 "driver_specific": {} 00:10:19.146 } 00:10:19.146 ] 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.146 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.420 "name": "Existed_Raid", 00:10:19.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.420 "strip_size_kb": 0, 00:10:19.420 "state": "configuring", 00:10:19.420 "raid_level": "raid1", 00:10:19.420 "superblock": false, 00:10:19.420 "num_base_bdevs": 3, 00:10:19.420 "num_base_bdevs_discovered": 2, 00:10:19.420 "num_base_bdevs_operational": 3, 00:10:19.420 "base_bdevs_list": [ 00:10:19.420 { 00:10:19.420 "name": "BaseBdev1", 00:10:19.420 "uuid": 
"86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:19.420 "is_configured": true, 00:10:19.420 "data_offset": 0, 00:10:19.420 "data_size": 65536 00:10:19.420 }, 00:10:19.420 { 00:10:19.420 "name": null, 00:10:19.420 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:19.420 "is_configured": false, 00:10:19.420 "data_offset": 0, 00:10:19.420 "data_size": 65536 00:10:19.420 }, 00:10:19.420 { 00:10:19.420 "name": "BaseBdev3", 00:10:19.420 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:19.420 "is_configured": true, 00:10:19.420 "data_offset": 0, 00:10:19.420 "data_size": 65536 00:10:19.420 } 00:10:19.420 ] 00:10:19.420 }' 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.420 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 [2024-11-20 17:44:46.822044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.680 17:44:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.680 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.939 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.939 "name": "Existed_Raid", 00:10:19.939 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:19.939 "strip_size_kb": 0, 00:10:19.939 "state": "configuring", 00:10:19.939 "raid_level": "raid1", 00:10:19.939 "superblock": false, 00:10:19.939 "num_base_bdevs": 3, 00:10:19.939 "num_base_bdevs_discovered": 1, 00:10:19.939 "num_base_bdevs_operational": 3, 00:10:19.939 "base_bdevs_list": [ 00:10:19.939 { 00:10:19.939 "name": "BaseBdev1", 00:10:19.939 "uuid": "86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:19.939 "is_configured": true, 00:10:19.939 "data_offset": 0, 00:10:19.939 "data_size": 65536 00:10:19.939 }, 00:10:19.939 { 00:10:19.939 "name": null, 00:10:19.939 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:19.939 "is_configured": false, 00:10:19.939 "data_offset": 0, 00:10:19.939 "data_size": 65536 00:10:19.939 }, 00:10:19.939 { 00:10:19.939 "name": null, 00:10:19.939 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:19.939 "is_configured": false, 00:10:19.939 "data_offset": 0, 00:10:19.939 "data_size": 65536 00:10:19.939 } 00:10:19.939 ] 00:10:19.939 }' 00:10:19.939 17:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.939 17:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.198 [2024-11-20 17:44:47.317225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.198 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.458 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.458 "name": "Existed_Raid", 00:10:20.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.458 "strip_size_kb": 0, 00:10:20.458 "state": "configuring", 00:10:20.458 "raid_level": "raid1", 00:10:20.458 "superblock": false, 00:10:20.458 "num_base_bdevs": 3, 00:10:20.458 "num_base_bdevs_discovered": 2, 00:10:20.458 "num_base_bdevs_operational": 3, 00:10:20.458 "base_bdevs_list": [ 00:10:20.458 { 00:10:20.458 "name": "BaseBdev1", 00:10:20.458 "uuid": "86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:20.458 "is_configured": true, 00:10:20.458 "data_offset": 0, 00:10:20.458 "data_size": 65536 00:10:20.458 }, 00:10:20.458 { 00:10:20.458 "name": null, 00:10:20.458 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:20.458 "is_configured": false, 00:10:20.458 "data_offset": 0, 00:10:20.458 "data_size": 65536 00:10:20.458 }, 00:10:20.458 { 00:10:20.458 "name": "BaseBdev3", 00:10:20.458 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:20.458 "is_configured": true, 00:10:20.458 "data_offset": 0, 00:10:20.458 "data_size": 65536 00:10:20.458 } 00:10:20.458 ] 00:10:20.458 }' 00:10:20.458 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.458 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.718 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.718 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.718 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.718 17:44:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.718 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.718 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:20.718 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.718 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.718 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.718 [2024-11-20 17:44:47.836455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.978 17:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.978 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.978 "name": "Existed_Raid", 00:10:20.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.978 "strip_size_kb": 0, 00:10:20.978 "state": "configuring", 00:10:20.978 "raid_level": "raid1", 00:10:20.978 "superblock": false, 00:10:20.978 "num_base_bdevs": 3, 00:10:20.978 "num_base_bdevs_discovered": 1, 00:10:20.978 "num_base_bdevs_operational": 3, 00:10:20.978 "base_bdevs_list": [ 00:10:20.978 { 00:10:20.978 "name": null, 00:10:20.978 "uuid": "86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:20.978 "is_configured": false, 00:10:20.978 "data_offset": 0, 00:10:20.978 "data_size": 65536 00:10:20.978 }, 00:10:20.978 { 00:10:20.978 "name": null, 00:10:20.978 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:20.978 "is_configured": false, 00:10:20.978 "data_offset": 0, 00:10:20.978 "data_size": 65536 00:10:20.978 }, 00:10:20.978 { 00:10:20.978 "name": "BaseBdev3", 00:10:20.978 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:20.978 "is_configured": true, 00:10:20.978 "data_offset": 0, 00:10:20.978 "data_size": 65536 00:10:20.978 } 00:10:20.978 ] 00:10:20.978 }' 00:10:20.978 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.978 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:21.238 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.238 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.238 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.238 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.498 [2024-11-20 17:44:48.446723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.498 "name": "Existed_Raid", 00:10:21.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.498 "strip_size_kb": 0, 00:10:21.498 "state": "configuring", 00:10:21.498 "raid_level": "raid1", 00:10:21.498 "superblock": false, 00:10:21.498 "num_base_bdevs": 3, 00:10:21.498 "num_base_bdevs_discovered": 2, 00:10:21.498 "num_base_bdevs_operational": 3, 00:10:21.498 "base_bdevs_list": [ 00:10:21.498 { 00:10:21.498 "name": null, 00:10:21.498 "uuid": "86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:21.498 "is_configured": false, 00:10:21.498 "data_offset": 0, 00:10:21.498 "data_size": 65536 00:10:21.498 }, 00:10:21.498 { 00:10:21.498 "name": "BaseBdev2", 00:10:21.498 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:21.498 "is_configured": true, 00:10:21.498 "data_offset": 0, 00:10:21.498 "data_size": 65536 00:10:21.498 }, 00:10:21.498 { 00:10:21.498 "name": "BaseBdev3", 
00:10:21.498 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:21.498 "is_configured": true, 00:10:21.498 "data_offset": 0, 00:10:21.498 "data_size": 65536 00:10:21.498 } 00:10:21.498 ] 00:10:21.498 }' 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.498 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.757 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.017 17:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 86cb78cf-f77a-42cb-930e-654f93859c3d 00:10:22.017 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.017 17:44:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.017 [2024-11-20 17:44:48.998793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:22.017 [2024-11-20 17:44:48.998870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:22.017 [2024-11-20 17:44:48.998880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:22.017 [2024-11-20 17:44:48.999218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:22.017 [2024-11-20 17:44:48.999415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:22.017 [2024-11-20 17:44:48.999440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:22.017 [2024-11-20 17:44:48.999762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.017 NewBaseBdev 00:10:22.017 17:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.017 
17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.017 [ 00:10:22.017 { 00:10:22.017 "name": "NewBaseBdev", 00:10:22.017 "aliases": [ 00:10:22.017 "86cb78cf-f77a-42cb-930e-654f93859c3d" 00:10:22.017 ], 00:10:22.017 "product_name": "Malloc disk", 00:10:22.017 "block_size": 512, 00:10:22.017 "num_blocks": 65536, 00:10:22.017 "uuid": "86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:22.017 "assigned_rate_limits": { 00:10:22.017 "rw_ios_per_sec": 0, 00:10:22.017 "rw_mbytes_per_sec": 0, 00:10:22.017 "r_mbytes_per_sec": 0, 00:10:22.017 "w_mbytes_per_sec": 0 00:10:22.017 }, 00:10:22.017 "claimed": true, 00:10:22.017 "claim_type": "exclusive_write", 00:10:22.017 "zoned": false, 00:10:22.017 "supported_io_types": { 00:10:22.017 "read": true, 00:10:22.017 "write": true, 00:10:22.017 "unmap": true, 00:10:22.017 "flush": true, 00:10:22.017 "reset": true, 00:10:22.017 "nvme_admin": false, 00:10:22.017 "nvme_io": false, 00:10:22.017 "nvme_io_md": false, 00:10:22.017 "write_zeroes": true, 00:10:22.017 "zcopy": true, 00:10:22.017 "get_zone_info": false, 00:10:22.017 "zone_management": false, 00:10:22.017 "zone_append": false, 00:10:22.017 "compare": false, 00:10:22.017 "compare_and_write": false, 00:10:22.017 "abort": true, 00:10:22.017 "seek_hole": false, 00:10:22.017 "seek_data": false, 00:10:22.017 "copy": true, 00:10:22.017 "nvme_iov_md": false 00:10:22.017 }, 00:10:22.017 "memory_domains": [ 00:10:22.017 { 00:10:22.017 "dma_device_id": "system", 00:10:22.017 "dma_device_type": 1 
00:10:22.017 }, 00:10:22.017 { 00:10:22.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.017 "dma_device_type": 2 00:10:22.017 } 00:10:22.017 ], 00:10:22.017 "driver_specific": {} 00:10:22.017 } 00:10:22.017 ] 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.017 "name": "Existed_Raid", 00:10:22.017 "uuid": "e67d76af-f0bf-4fed-b5bd-8d6d1b18d573", 00:10:22.017 "strip_size_kb": 0, 00:10:22.017 "state": "online", 00:10:22.017 "raid_level": "raid1", 00:10:22.017 "superblock": false, 00:10:22.017 "num_base_bdevs": 3, 00:10:22.017 "num_base_bdevs_discovered": 3, 00:10:22.017 "num_base_bdevs_operational": 3, 00:10:22.017 "base_bdevs_list": [ 00:10:22.017 { 00:10:22.017 "name": "NewBaseBdev", 00:10:22.017 "uuid": "86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:22.017 "is_configured": true, 00:10:22.017 "data_offset": 0, 00:10:22.017 "data_size": 65536 00:10:22.017 }, 00:10:22.017 { 00:10:22.017 "name": "BaseBdev2", 00:10:22.017 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:22.017 "is_configured": true, 00:10:22.017 "data_offset": 0, 00:10:22.017 "data_size": 65536 00:10:22.017 }, 00:10:22.017 { 00:10:22.017 "name": "BaseBdev3", 00:10:22.017 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:22.017 "is_configured": true, 00:10:22.017 "data_offset": 0, 00:10:22.017 "data_size": 65536 00:10:22.017 } 00:10:22.017 ] 00:10:22.017 }' 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.017 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.585 [2024-11-20 17:44:49.486379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.585 "name": "Existed_Raid", 00:10:22.585 "aliases": [ 00:10:22.585 "e67d76af-f0bf-4fed-b5bd-8d6d1b18d573" 00:10:22.585 ], 00:10:22.585 "product_name": "Raid Volume", 00:10:22.585 "block_size": 512, 00:10:22.585 "num_blocks": 65536, 00:10:22.585 "uuid": "e67d76af-f0bf-4fed-b5bd-8d6d1b18d573", 00:10:22.585 "assigned_rate_limits": { 00:10:22.585 "rw_ios_per_sec": 0, 00:10:22.585 "rw_mbytes_per_sec": 0, 00:10:22.585 "r_mbytes_per_sec": 0, 00:10:22.585 "w_mbytes_per_sec": 0 00:10:22.585 }, 00:10:22.585 "claimed": false, 00:10:22.585 "zoned": false, 00:10:22.585 "supported_io_types": { 00:10:22.585 "read": true, 00:10:22.585 "write": true, 00:10:22.585 "unmap": false, 00:10:22.585 "flush": false, 00:10:22.585 "reset": true, 00:10:22.585 "nvme_admin": false, 00:10:22.585 "nvme_io": false, 00:10:22.585 "nvme_io_md": false, 00:10:22.585 "write_zeroes": true, 00:10:22.585 "zcopy": false, 00:10:22.585 "get_zone_info": false, 00:10:22.585 "zone_management": false, 00:10:22.585 
"zone_append": false, 00:10:22.585 "compare": false, 00:10:22.585 "compare_and_write": false, 00:10:22.585 "abort": false, 00:10:22.585 "seek_hole": false, 00:10:22.585 "seek_data": false, 00:10:22.585 "copy": false, 00:10:22.585 "nvme_iov_md": false 00:10:22.585 }, 00:10:22.585 "memory_domains": [ 00:10:22.585 { 00:10:22.585 "dma_device_id": "system", 00:10:22.585 "dma_device_type": 1 00:10:22.585 }, 00:10:22.585 { 00:10:22.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.585 "dma_device_type": 2 00:10:22.585 }, 00:10:22.585 { 00:10:22.585 "dma_device_id": "system", 00:10:22.585 "dma_device_type": 1 00:10:22.585 }, 00:10:22.585 { 00:10:22.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.585 "dma_device_type": 2 00:10:22.585 }, 00:10:22.585 { 00:10:22.585 "dma_device_id": "system", 00:10:22.585 "dma_device_type": 1 00:10:22.585 }, 00:10:22.585 { 00:10:22.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.585 "dma_device_type": 2 00:10:22.585 } 00:10:22.585 ], 00:10:22.585 "driver_specific": { 00:10:22.585 "raid": { 00:10:22.585 "uuid": "e67d76af-f0bf-4fed-b5bd-8d6d1b18d573", 00:10:22.585 "strip_size_kb": 0, 00:10:22.585 "state": "online", 00:10:22.585 "raid_level": "raid1", 00:10:22.585 "superblock": false, 00:10:22.585 "num_base_bdevs": 3, 00:10:22.585 "num_base_bdevs_discovered": 3, 00:10:22.585 "num_base_bdevs_operational": 3, 00:10:22.585 "base_bdevs_list": [ 00:10:22.585 { 00:10:22.585 "name": "NewBaseBdev", 00:10:22.585 "uuid": "86cb78cf-f77a-42cb-930e-654f93859c3d", 00:10:22.585 "is_configured": true, 00:10:22.585 "data_offset": 0, 00:10:22.585 "data_size": 65536 00:10:22.585 }, 00:10:22.585 { 00:10:22.585 "name": "BaseBdev2", 00:10:22.585 "uuid": "dd827239-eeb9-4944-9944-7926efcbcfcf", 00:10:22.585 "is_configured": true, 00:10:22.585 "data_offset": 0, 00:10:22.585 "data_size": 65536 00:10:22.585 }, 00:10:22.585 { 00:10:22.585 "name": "BaseBdev3", 00:10:22.585 "uuid": "9325fdf4-0320-4921-a3cd-52d444548982", 00:10:22.585 "is_configured": true, 
00:10:22.585 "data_offset": 0, 00:10:22.585 "data_size": 65536 00:10:22.585 } 00:10:22.585 ] 00:10:22.585 } 00:10:22.585 } 00:10:22.585 }' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:22.585 BaseBdev2 00:10:22.585 BaseBdev3' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:22.585 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.586 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.586 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.586 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.846 [2024-11-20 17:44:49.765573] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:10:22.846 [2024-11-20 17:44:49.765714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.846 [2024-11-20 17:44:49.765833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.846 [2024-11-20 17:44:49.766180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.846 [2024-11-20 17:44:49.766194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67793 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67793 ']' 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67793 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67793 00:10:22.846 killing process with pid 67793 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67793' 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67793 00:10:22.846 [2024-11-20 17:44:49.810272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:10:22.846 17:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67793 00:10:23.106 [2024-11-20 17:44:50.153866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:24.494 00:10:24.494 real 0m10.944s 00:10:24.494 user 0m17.103s 00:10:24.494 sys 0m1.950s 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.494 ************************************ 00:10:24.494 END TEST raid_state_function_test 00:10:24.494 ************************************ 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.494 17:44:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:24.494 17:44:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.494 17:44:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.494 17:44:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.494 ************************************ 00:10:24.494 START TEST raid_state_function_test_sb 00:10:24.494 ************************************ 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68420 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68420' 00:10:24.494 Process raid pid: 68420 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68420 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68420 ']' 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.494 17:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.495 17:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.495 17:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.495 17:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.495 [2024-11-20 17:44:51.642183] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:10:24.495 [2024-11-20 17:44:51.642305] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.762 [2024-11-20 17:44:51.818625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.029 [2024-11-20 17:44:51.953628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.029 [2024-11-20 17:44:52.195595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.029 [2024-11-20 17:44:52.195638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.598 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.598 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:25.598 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:25.598 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.598 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.598 [2024-11-20 17:44:52.510090] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.598 [2024-11-20 17:44:52.510164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.599 [2024-11-20 17:44:52.510183] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.599 [2024-11-20 17:44:52.510194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.599 [2024-11-20 17:44:52.510201] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:25.599 [2024-11-20 17:44:52.510210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.599 "name": "Existed_Raid", 00:10:25.599 "uuid": "4ffde515-c527-42cd-9b57-3788c17211e9", 00:10:25.599 "strip_size_kb": 0, 00:10:25.599 "state": "configuring", 00:10:25.599 "raid_level": "raid1", 00:10:25.599 "superblock": true, 00:10:25.599 "num_base_bdevs": 3, 00:10:25.599 "num_base_bdevs_discovered": 0, 00:10:25.599 "num_base_bdevs_operational": 3, 00:10:25.599 "base_bdevs_list": [ 00:10:25.599 { 00:10:25.599 "name": "BaseBdev1", 00:10:25.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.599 "is_configured": false, 00:10:25.599 "data_offset": 0, 00:10:25.599 "data_size": 0 00:10:25.599 }, 00:10:25.599 { 00:10:25.599 "name": "BaseBdev2", 00:10:25.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.599 "is_configured": false, 00:10:25.599 "data_offset": 0, 00:10:25.599 "data_size": 0 00:10:25.599 }, 00:10:25.599 { 00:10:25.599 "name": "BaseBdev3", 00:10:25.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.599 "is_configured": false, 00:10:25.599 "data_offset": 0, 00:10:25.599 "data_size": 0 00:10:25.599 } 00:10:25.599 ] 00:10:25.599 }' 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.599 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.859 [2024-11-20 17:44:52.929293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.859 [2024-11-20 17:44:52.929352] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.859 [2024-11-20 17:44:52.941251] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.859 [2024-11-20 17:44:52.941303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.859 [2024-11-20 17:44:52.941313] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.859 [2024-11-20 17:44:52.941323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.859 [2024-11-20 17:44:52.941331] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.859 [2024-11-20 17:44:52.941341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.859 [2024-11-20 17:44:52.995679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.859 BaseBdev1 
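The `verify_raid_bdev_state` calls above filter the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compare each field against the expected value. That logic can be sketched in Python (a hypothetical re-implementation, not the actual shell helper; the JSON literal reproduces the fields shown in the log's first state dump, where no base bdevs exist yet):

```python
import json

# Record as returned for Existed_Raid before any base bdev exists
# (fields copied from the log's raid_bdev_info dump above).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirror the shell helper: each expectation is checked field by field.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 3)
print(raid_bdev_info["num_base_bdevs_discovered"])  # 0: no base bdevs yet
```

In a live run the record would come from the RPC rather than a literal; with `-s` (superblock) and no existing base bdevs, the raid bdev is registered but stays in `configuring` state.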
00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.859 17:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.859 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.859 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.859 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.859 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.859 [ 00:10:25.859 { 00:10:25.859 "name": "BaseBdev1", 00:10:25.859 "aliases": [ 00:10:25.859 "ed4c1010-7dea-457d-be8c-2691481b854e" 00:10:25.859 ], 00:10:25.859 "product_name": "Malloc disk", 00:10:25.859 "block_size": 512, 00:10:25.859 "num_blocks": 65536, 00:10:25.859 "uuid": "ed4c1010-7dea-457d-be8c-2691481b854e", 00:10:25.859 "assigned_rate_limits": { 00:10:25.859 
"rw_ios_per_sec": 0, 00:10:25.859 "rw_mbytes_per_sec": 0, 00:10:25.859 "r_mbytes_per_sec": 0, 00:10:25.859 "w_mbytes_per_sec": 0 00:10:25.859 }, 00:10:25.859 "claimed": true, 00:10:25.859 "claim_type": "exclusive_write", 00:10:25.859 "zoned": false, 00:10:25.859 "supported_io_types": { 00:10:25.859 "read": true, 00:10:25.859 "write": true, 00:10:25.859 "unmap": true, 00:10:25.859 "flush": true, 00:10:25.859 "reset": true, 00:10:25.859 "nvme_admin": false, 00:10:25.859 "nvme_io": false, 00:10:25.859 "nvme_io_md": false, 00:10:25.859 "write_zeroes": true, 00:10:25.860 "zcopy": true, 00:10:25.860 "get_zone_info": false, 00:10:25.860 "zone_management": false, 00:10:25.860 "zone_append": false, 00:10:25.860 "compare": false, 00:10:25.860 "compare_and_write": false, 00:10:25.860 "abort": true, 00:10:25.860 "seek_hole": false, 00:10:25.860 "seek_data": false, 00:10:25.860 "copy": true, 00:10:25.860 "nvme_iov_md": false 00:10:25.860 }, 00:10:25.860 "memory_domains": [ 00:10:25.860 { 00:10:25.860 "dma_device_id": "system", 00:10:25.860 "dma_device_type": 1 00:10:25.860 }, 00:10:25.860 { 00:10:25.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.860 "dma_device_type": 2 00:10:25.860 } 00:10:25.860 ], 00:10:25.860 "driver_specific": {} 00:10:25.860 } 00:10:25.860 ] 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.860 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.120 "name": "Existed_Raid", 00:10:26.120 "uuid": "6f545c3d-1677-4a2b-9230-443e5ba5c215", 00:10:26.120 "strip_size_kb": 0, 00:10:26.120 "state": "configuring", 00:10:26.120 "raid_level": "raid1", 00:10:26.120 "superblock": true, 00:10:26.120 "num_base_bdevs": 3, 00:10:26.120 "num_base_bdevs_discovered": 1, 00:10:26.120 "num_base_bdevs_operational": 3, 00:10:26.120 "base_bdevs_list": [ 00:10:26.120 { 00:10:26.120 "name": "BaseBdev1", 00:10:26.120 "uuid": "ed4c1010-7dea-457d-be8c-2691481b854e", 00:10:26.120 "is_configured": true, 00:10:26.120 "data_offset": 2048, 00:10:26.120 "data_size": 63488 
00:10:26.120 }, 00:10:26.120 { 00:10:26.120 "name": "BaseBdev2", 00:10:26.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.120 "is_configured": false, 00:10:26.120 "data_offset": 0, 00:10:26.120 "data_size": 0 00:10:26.120 }, 00:10:26.120 { 00:10:26.120 "name": "BaseBdev3", 00:10:26.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.120 "is_configured": false, 00:10:26.120 "data_offset": 0, 00:10:26.120 "data_size": 0 00:10:26.120 } 00:10:26.120 ] 00:10:26.120 }' 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.120 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.381 [2024-11-20 17:44:53.482961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.381 [2024-11-20 17:44:53.483061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.381 [2024-11-20 17:44:53.494961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.381 [2024-11-20 17:44:53.497285] 
bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.381 [2024-11-20 17:44:53.497338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.381 [2024-11-20 17:44:53.497352] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.381 [2024-11-20 17:44:53.497363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.381 "name": "Existed_Raid", 00:10:26.381 "uuid": "de674c71-7179-4ed7-bb98-47698e279d38", 00:10:26.381 "strip_size_kb": 0, 00:10:26.381 "state": "configuring", 00:10:26.381 "raid_level": "raid1", 00:10:26.381 "superblock": true, 00:10:26.381 "num_base_bdevs": 3, 00:10:26.381 "num_base_bdevs_discovered": 1, 00:10:26.381 "num_base_bdevs_operational": 3, 00:10:26.381 "base_bdevs_list": [ 00:10:26.381 { 00:10:26.381 "name": "BaseBdev1", 00:10:26.381 "uuid": "ed4c1010-7dea-457d-be8c-2691481b854e", 00:10:26.381 "is_configured": true, 00:10:26.381 "data_offset": 2048, 00:10:26.381 "data_size": 63488 00:10:26.381 }, 00:10:26.381 { 00:10:26.381 "name": "BaseBdev2", 00:10:26.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.381 "is_configured": false, 00:10:26.381 "data_offset": 0, 00:10:26.381 "data_size": 0 00:10:26.381 }, 00:10:26.381 { 00:10:26.381 "name": "BaseBdev3", 00:10:26.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.381 "is_configured": false, 00:10:26.381 "data_offset": 0, 00:10:26.381 "data_size": 0 00:10:26.381 } 00:10:26.381 ] 00:10:26.381 }' 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.381 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:26.952 17:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.952 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.952 17:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.952 [2024-11-20 17:44:54.004378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.952 BaseBdev2 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.952 [ 00:10:26.952 { 00:10:26.952 "name": "BaseBdev2", 00:10:26.952 "aliases": [ 00:10:26.952 "ecc8a449-726c-44b0-a071-c1238e029b4f" 00:10:26.952 ], 00:10:26.952 "product_name": "Malloc disk", 00:10:26.952 "block_size": 512, 00:10:26.952 "num_blocks": 65536, 00:10:26.952 "uuid": "ecc8a449-726c-44b0-a071-c1238e029b4f", 00:10:26.952 "assigned_rate_limits": { 00:10:26.952 "rw_ios_per_sec": 0, 00:10:26.952 "rw_mbytes_per_sec": 0, 00:10:26.952 "r_mbytes_per_sec": 0, 00:10:26.952 "w_mbytes_per_sec": 0 00:10:26.952 }, 00:10:26.952 "claimed": true, 00:10:26.952 "claim_type": "exclusive_write", 00:10:26.952 "zoned": false, 00:10:26.952 "supported_io_types": { 00:10:26.952 "read": true, 00:10:26.952 "write": true, 00:10:26.952 "unmap": true, 00:10:26.952 "flush": true, 00:10:26.952 "reset": true, 00:10:26.952 "nvme_admin": false, 00:10:26.952 "nvme_io": false, 00:10:26.952 "nvme_io_md": false, 00:10:26.952 "write_zeroes": true, 00:10:26.952 "zcopy": true, 00:10:26.952 "get_zone_info": false, 00:10:26.952 "zone_management": false, 00:10:26.952 "zone_append": false, 00:10:26.952 "compare": false, 00:10:26.952 "compare_and_write": false, 00:10:26.952 "abort": true, 00:10:26.952 "seek_hole": false, 00:10:26.952 "seek_data": false, 00:10:26.952 "copy": true, 00:10:26.952 "nvme_iov_md": false 00:10:26.952 }, 00:10:26.952 "memory_domains": [ 00:10:26.952 { 00:10:26.952 "dma_device_id": "system", 00:10:26.952 "dma_device_type": 1 00:10:26.952 }, 00:10:26.952 { 00:10:26.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.952 "dma_device_type": 2 00:10:26.952 } 00:10:26.952 ], 00:10:26.952 "driver_specific": {} 00:10:26.952 } 00:10:26.952 ] 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
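The `waitforbdev` helper traced above blocks until the named bdev is visible, by calling `bdev_wait_for_examine` and then `bdev_get_bdevs -b <name> -t 2000`. A minimal poll-loop sketch of that idea (hypothetical, with a stand-in lookup table in place of the RPC; the 32 MiB / 512-byte geometry matches the `bdev_malloc_create 32 512` calls in the log):

```python
import time

def wait_for_bdev(get_bdevs, name, timeout_ms=2000, poll_ms=50):
    """Poll get_bdevs(name) until it returns a record or the timeout expires."""
    deadline = time.monotonic() + timeout_ms / 1000
    while time.monotonic() < deadline:
        bdevs = get_bdevs(name)
        if bdevs:
            return bdevs[0]
        time.sleep(poll_ms / 1000)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_ms} ms")

# Stand-in for the RPC: pretend the malloc bdev already exists.
fake_table = {"BaseBdev2": [{"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}]}
bdev = wait_for_bdev(lambda n: fake_table.get(n, []), "BaseBdev2")
print(bdev["num_blocks"] * bdev["block_size"])  # 33554432 bytes = 32 MiB
```

The real helper delegates the timeout to the RPC's `-t` flag instead of polling client-side; the sketch only shows the wait-until-present behaviour the test relies on.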
00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.952 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.953 
17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.953 "name": "Existed_Raid", 00:10:26.953 "uuid": "de674c71-7179-4ed7-bb98-47698e279d38", 00:10:26.953 "strip_size_kb": 0, 00:10:26.953 "state": "configuring", 00:10:26.953 "raid_level": "raid1", 00:10:26.953 "superblock": true, 00:10:26.953 "num_base_bdevs": 3, 00:10:26.953 "num_base_bdevs_discovered": 2, 00:10:26.953 "num_base_bdevs_operational": 3, 00:10:26.953 "base_bdevs_list": [ 00:10:26.953 { 00:10:26.953 "name": "BaseBdev1", 00:10:26.953 "uuid": "ed4c1010-7dea-457d-be8c-2691481b854e", 00:10:26.953 "is_configured": true, 00:10:26.953 "data_offset": 2048, 00:10:26.953 "data_size": 63488 00:10:26.953 }, 00:10:26.953 { 00:10:26.953 "name": "BaseBdev2", 00:10:26.953 "uuid": "ecc8a449-726c-44b0-a071-c1238e029b4f", 00:10:26.953 "is_configured": true, 00:10:26.953 "data_offset": 2048, 00:10:26.953 "data_size": 63488 00:10:26.953 }, 00:10:26.953 { 00:10:26.953 "name": "BaseBdev3", 00:10:26.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.953 "is_configured": false, 00:10:26.953 "data_offset": 0, 00:10:26.953 "data_size": 0 00:10:26.953 } 00:10:26.953 ] 00:10:26.953 }' 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.953 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.523 [2024-11-20 17:44:54.526354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.523 [2024-11-20 17:44:54.526690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:27.523 [2024-11-20 17:44:54.526716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.523 [2024-11-20 17:44:54.527249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:27.523 [2024-11-20 17:44:54.527430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.523 [2024-11-20 17:44:54.527446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:27.523 BaseBdev3 00:10:27.523 [2024-11-20 17:44:54.527623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.523 17:44:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.523 [ 00:10:27.523 { 00:10:27.523 "name": "BaseBdev3", 00:10:27.523 "aliases": [ 00:10:27.523 "2543efb9-0dc0-4d1d-a2ac-ad9e6fded993" 00:10:27.523 ], 00:10:27.523 "product_name": "Malloc disk", 00:10:27.523 "block_size": 512, 00:10:27.523 "num_blocks": 65536, 00:10:27.523 "uuid": "2543efb9-0dc0-4d1d-a2ac-ad9e6fded993", 00:10:27.523 "assigned_rate_limits": { 00:10:27.523 "rw_ios_per_sec": 0, 00:10:27.523 "rw_mbytes_per_sec": 0, 00:10:27.523 "r_mbytes_per_sec": 0, 00:10:27.523 "w_mbytes_per_sec": 0 00:10:27.523 }, 00:10:27.523 "claimed": true, 00:10:27.523 "claim_type": "exclusive_write", 00:10:27.523 "zoned": false, 00:10:27.523 "supported_io_types": { 00:10:27.523 "read": true, 00:10:27.523 "write": true, 00:10:27.523 "unmap": true, 00:10:27.523 "flush": true, 00:10:27.523 "reset": true, 00:10:27.523 "nvme_admin": false, 00:10:27.523 "nvme_io": false, 00:10:27.523 "nvme_io_md": false, 00:10:27.523 "write_zeroes": true, 00:10:27.523 "zcopy": true, 00:10:27.523 "get_zone_info": false, 00:10:27.523 "zone_management": false, 00:10:27.523 "zone_append": false, 00:10:27.523 "compare": false, 00:10:27.523 "compare_and_write": false, 00:10:27.523 "abort": true, 00:10:27.523 "seek_hole": false, 00:10:27.523 "seek_data": false, 00:10:27.523 "copy": true, 00:10:27.523 "nvme_iov_md": false 00:10:27.523 }, 00:10:27.523 "memory_domains": [ 00:10:27.523 { 00:10:27.523 "dma_device_id": "system", 00:10:27.523 "dma_device_type": 1 00:10:27.523 }, 00:10:27.523 { 00:10:27.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.523 "dma_device_type": 2 00:10:27.523 } 00:10:27.523 ], 00:10:27.523 "driver_specific": {} 00:10:27.523 } 00:10:27.523 ] 
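At this point the log shows the raid bdev transitioning: once `BaseBdev3` is claimed, all three entries in `base_bdevs_list` report `is_configured: true`, `num_base_bdevs_discovered` reaches `num_base_bdevs`, and the state flips from `configuring` to `online`. A small sketch of that invariant (hypothetical; the list literal mirrors the log's final state dump):

```python
import json

# base_bdevs_list after all three malloc bdevs were created and claimed
# (trimmed to the fields relevant here).
base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "is_configured": true},
  {"name": "BaseBdev2", "is_configured": true},
  {"name": "BaseBdev3", "is_configured": true}
]
""")

num_base_bdevs = 3
# "discovered" counts configured base bdevs; the raid goes online only
# when every base bdev is present.
discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
state = "online" if discovered == num_base_bdevs else "configuring"
print(state)  # all 3 of 3 configured
```

With any entry still unconfigured (the all-zero UUID placeholders seen earlier in the log), `discovered < num_base_bdevs` and the raid would stay in `configuring`.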
00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.523 
17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.523 "name": "Existed_Raid", 00:10:27.523 "uuid": "de674c71-7179-4ed7-bb98-47698e279d38", 00:10:27.523 "strip_size_kb": 0, 00:10:27.523 "state": "online", 00:10:27.523 "raid_level": "raid1", 00:10:27.523 "superblock": true, 00:10:27.523 "num_base_bdevs": 3, 00:10:27.523 "num_base_bdevs_discovered": 3, 00:10:27.523 "num_base_bdevs_operational": 3, 00:10:27.523 "base_bdevs_list": [ 00:10:27.523 { 00:10:27.523 "name": "BaseBdev1", 00:10:27.523 "uuid": "ed4c1010-7dea-457d-be8c-2691481b854e", 00:10:27.523 "is_configured": true, 00:10:27.523 "data_offset": 2048, 00:10:27.523 "data_size": 63488 00:10:27.523 }, 00:10:27.523 { 00:10:27.523 "name": "BaseBdev2", 00:10:27.523 "uuid": "ecc8a449-726c-44b0-a071-c1238e029b4f", 00:10:27.523 "is_configured": true, 00:10:27.523 "data_offset": 2048, 00:10:27.523 "data_size": 63488 00:10:27.523 }, 00:10:27.523 { 00:10:27.523 "name": "BaseBdev3", 00:10:27.523 "uuid": "2543efb9-0dc0-4d1d-a2ac-ad9e6fded993", 00:10:27.523 "is_configured": true, 00:10:27.523 "data_offset": 2048, 00:10:27.523 "data_size": 63488 00:10:27.523 } 00:10:27.523 ] 00:10:27.523 }' 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.523 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
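`verify_raid_bdev_properties` then fetches the raid volume itself with `bdev_get_bdevs -b Existed_Raid` and inspects its `supported_io_types`. Comparing that record with the malloc base bdev records dumped earlier shows which IO types the raid1 volume does not pass through; a sketch of that comparison (dict literals copied from the log's output, trimmed to the fields contrasted here):

```python
# supported_io_types as reported by a malloc base bdev vs the raid1 volume,
# per the bdev_get_bdevs records in this log (subset of fields).
malloc_io = {"read": True, "write": True, "unmap": True, "flush": True,
             "reset": True, "abort": True, "zcopy": True, "copy": True}
raid1_io = {"read": True, "write": True, "unmap": False, "flush": False,
            "reset": True, "abort": False, "zcopy": False, "copy": False}

# IO types the base bdevs support but the raid1 volume does not expose.
dropped = sorted(t for t in malloc_io if malloc_io[t] and not raid1_io[t])
print(dropped)  # ['abort', 'copy', 'flush', 'unmap', 'zcopy']
```

The raid volume also reports one `system`/`SPDK_ACCEL_DMA_DEVICE` memory-domain pair per base bdev (three pairs in the dump below), whereas each malloc bdev reports a single pair.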
00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.093 17:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.093 [2024-11-20 17:44:54.993984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.093 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.093 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.093 "name": "Existed_Raid", 00:10:28.093 "aliases": [ 00:10:28.093 "de674c71-7179-4ed7-bb98-47698e279d38" 00:10:28.093 ], 00:10:28.093 "product_name": "Raid Volume", 00:10:28.093 "block_size": 512, 00:10:28.093 "num_blocks": 63488, 00:10:28.093 "uuid": "de674c71-7179-4ed7-bb98-47698e279d38", 00:10:28.094 "assigned_rate_limits": { 00:10:28.094 "rw_ios_per_sec": 0, 00:10:28.094 "rw_mbytes_per_sec": 0, 00:10:28.094 "r_mbytes_per_sec": 0, 00:10:28.094 "w_mbytes_per_sec": 0 00:10:28.094 }, 00:10:28.094 "claimed": false, 00:10:28.094 "zoned": false, 00:10:28.094 "supported_io_types": { 00:10:28.094 "read": true, 00:10:28.094 "write": true, 00:10:28.094 "unmap": false, 00:10:28.094 "flush": false, 00:10:28.094 "reset": true, 00:10:28.094 "nvme_admin": false, 00:10:28.094 "nvme_io": false, 00:10:28.094 "nvme_io_md": false, 00:10:28.094 "write_zeroes": true, 
00:10:28.094 "zcopy": false, 00:10:28.094 "get_zone_info": false, 00:10:28.094 "zone_management": false, 00:10:28.094 "zone_append": false, 00:10:28.094 "compare": false, 00:10:28.094 "compare_and_write": false, 00:10:28.094 "abort": false, 00:10:28.094 "seek_hole": false, 00:10:28.094 "seek_data": false, 00:10:28.094 "copy": false, 00:10:28.094 "nvme_iov_md": false 00:10:28.094 }, 00:10:28.094 "memory_domains": [ 00:10:28.094 { 00:10:28.094 "dma_device_id": "system", 00:10:28.094 "dma_device_type": 1 00:10:28.094 }, 00:10:28.094 { 00:10:28.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.094 "dma_device_type": 2 00:10:28.094 }, 00:10:28.094 { 00:10:28.094 "dma_device_id": "system", 00:10:28.094 "dma_device_type": 1 00:10:28.094 }, 00:10:28.094 { 00:10:28.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.094 "dma_device_type": 2 00:10:28.094 }, 00:10:28.094 { 00:10:28.094 "dma_device_id": "system", 00:10:28.094 "dma_device_type": 1 00:10:28.094 }, 00:10:28.094 { 00:10:28.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.094 "dma_device_type": 2 00:10:28.094 } 00:10:28.094 ], 00:10:28.094 "driver_specific": { 00:10:28.094 "raid": { 00:10:28.094 "uuid": "de674c71-7179-4ed7-bb98-47698e279d38", 00:10:28.094 "strip_size_kb": 0, 00:10:28.094 "state": "online", 00:10:28.094 "raid_level": "raid1", 00:10:28.094 "superblock": true, 00:10:28.094 "num_base_bdevs": 3, 00:10:28.094 "num_base_bdevs_discovered": 3, 00:10:28.094 "num_base_bdevs_operational": 3, 00:10:28.094 "base_bdevs_list": [ 00:10:28.094 { 00:10:28.094 "name": "BaseBdev1", 00:10:28.094 "uuid": "ed4c1010-7dea-457d-be8c-2691481b854e", 00:10:28.094 "is_configured": true, 00:10:28.094 "data_offset": 2048, 00:10:28.094 "data_size": 63488 00:10:28.094 }, 00:10:28.094 { 00:10:28.094 "name": "BaseBdev2", 00:10:28.094 "uuid": "ecc8a449-726c-44b0-a071-c1238e029b4f", 00:10:28.094 "is_configured": true, 00:10:28.094 "data_offset": 2048, 00:10:28.094 "data_size": 63488 00:10:28.094 }, 00:10:28.094 { 
00:10:28.094 "name": "BaseBdev3", 00:10:28.094 "uuid": "2543efb9-0dc0-4d1d-a2ac-ad9e6fded993", 00:10:28.094 "is_configured": true, 00:10:28.094 "data_offset": 2048, 00:10:28.094 "data_size": 63488 00:10:28.094 } 00:10:28.094 ] 00:10:28.094 } 00:10:28.094 } 00:10:28.094 }' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:28.094 BaseBdev2 00:10:28.094 BaseBdev3' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.094 17:44:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.094 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.094 [2024-11-20 17:44:55.265239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.374 
17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.374 "name": "Existed_Raid", 00:10:28.374 "uuid": "de674c71-7179-4ed7-bb98-47698e279d38", 00:10:28.374 "strip_size_kb": 0, 00:10:28.374 "state": "online", 00:10:28.374 "raid_level": "raid1", 00:10:28.374 "superblock": true, 00:10:28.374 "num_base_bdevs": 3, 00:10:28.374 "num_base_bdevs_discovered": 2, 00:10:28.374 "num_base_bdevs_operational": 2, 00:10:28.374 "base_bdevs_list": [ 00:10:28.374 { 00:10:28.374 "name": null, 00:10:28.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.374 "is_configured": false, 00:10:28.374 "data_offset": 0, 00:10:28.374 "data_size": 63488 00:10:28.374 }, 00:10:28.374 { 00:10:28.374 "name": "BaseBdev2", 00:10:28.374 "uuid": "ecc8a449-726c-44b0-a071-c1238e029b4f", 00:10:28.374 "is_configured": true, 00:10:28.374 "data_offset": 2048, 00:10:28.374 "data_size": 63488 00:10:28.374 }, 00:10:28.374 { 00:10:28.374 "name": "BaseBdev3", 00:10:28.374 "uuid": "2543efb9-0dc0-4d1d-a2ac-ad9e6fded993", 00:10:28.374 "is_configured": true, 00:10:28.374 "data_offset": 2048, 00:10:28.374 "data_size": 63488 00:10:28.374 } 00:10:28.374 ] 00:10:28.374 }' 00:10:28.374 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.374 
17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.953 [2024-11-20 17:44:55.857453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.953 17:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.953 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.953 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.953 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:28.953 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.953 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.953 [2024-11-20 17:44:56.030203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.953 [2024-11-20 17:44:56.030457] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.213 [2024-11-20 17:44:56.145134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.213 [2024-11-20 17:44:56.145326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.213 [2024-11-20 17:44:56.145372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.213 BaseBdev2 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.213 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.213 [ 00:10:29.213 { 00:10:29.213 "name": "BaseBdev2", 00:10:29.213 "aliases": [ 00:10:29.214 "da509818-ddde-4434-a05a-3269de3fbdc1" 00:10:29.214 ], 00:10:29.214 "product_name": "Malloc disk", 00:10:29.214 "block_size": 512, 00:10:29.214 "num_blocks": 65536, 00:10:29.214 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:29.214 "assigned_rate_limits": { 00:10:29.214 "rw_ios_per_sec": 0, 00:10:29.214 "rw_mbytes_per_sec": 0, 00:10:29.214 "r_mbytes_per_sec": 0, 00:10:29.214 "w_mbytes_per_sec": 0 00:10:29.214 }, 00:10:29.214 "claimed": false, 00:10:29.214 "zoned": false, 00:10:29.214 "supported_io_types": { 00:10:29.214 "read": true, 00:10:29.214 "write": true, 00:10:29.214 "unmap": true, 00:10:29.214 "flush": true, 00:10:29.214 "reset": true, 00:10:29.214 "nvme_admin": false, 00:10:29.214 "nvme_io": false, 00:10:29.214 
"nvme_io_md": false, 00:10:29.214 "write_zeroes": true, 00:10:29.214 "zcopy": true, 00:10:29.214 "get_zone_info": false, 00:10:29.214 "zone_management": false, 00:10:29.214 "zone_append": false, 00:10:29.214 "compare": false, 00:10:29.214 "compare_and_write": false, 00:10:29.214 "abort": true, 00:10:29.214 "seek_hole": false, 00:10:29.214 "seek_data": false, 00:10:29.214 "copy": true, 00:10:29.214 "nvme_iov_md": false 00:10:29.214 }, 00:10:29.214 "memory_domains": [ 00:10:29.214 { 00:10:29.214 "dma_device_id": "system", 00:10:29.214 "dma_device_type": 1 00:10:29.214 }, 00:10:29.214 { 00:10:29.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.214 "dma_device_type": 2 00:10:29.214 } 00:10:29.214 ], 00:10:29.214 "driver_specific": {} 00:10:29.214 } 00:10:29.214 ] 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.214 BaseBdev3 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.214 [ 00:10:29.214 { 00:10:29.214 "name": "BaseBdev3", 00:10:29.214 "aliases": [ 00:10:29.214 "5b89fa53-ff92-4a69-94a4-eee4966a28a5" 00:10:29.214 ], 00:10:29.214 "product_name": "Malloc disk", 00:10:29.214 "block_size": 512, 00:10:29.214 "num_blocks": 65536, 00:10:29.214 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:29.214 "assigned_rate_limits": { 00:10:29.214 "rw_ios_per_sec": 0, 00:10:29.214 "rw_mbytes_per_sec": 0, 00:10:29.214 "r_mbytes_per_sec": 0, 00:10:29.214 "w_mbytes_per_sec": 0 00:10:29.214 }, 00:10:29.214 "claimed": false, 00:10:29.214 "zoned": false, 00:10:29.214 "supported_io_types": { 00:10:29.214 "read": true, 00:10:29.214 "write": true, 00:10:29.214 "unmap": true, 00:10:29.214 "flush": true, 00:10:29.214 "reset": true, 00:10:29.214 "nvme_admin": false, 
00:10:29.214 "nvme_io": false, 00:10:29.214 "nvme_io_md": false, 00:10:29.214 "write_zeroes": true, 00:10:29.214 "zcopy": true, 00:10:29.214 "get_zone_info": false, 00:10:29.214 "zone_management": false, 00:10:29.214 "zone_append": false, 00:10:29.214 "compare": false, 00:10:29.214 "compare_and_write": false, 00:10:29.214 "abort": true, 00:10:29.214 "seek_hole": false, 00:10:29.214 "seek_data": false, 00:10:29.214 "copy": true, 00:10:29.214 "nvme_iov_md": false 00:10:29.214 }, 00:10:29.214 "memory_domains": [ 00:10:29.214 { 00:10:29.214 "dma_device_id": "system", 00:10:29.214 "dma_device_type": 1 00:10:29.214 }, 00:10:29.214 { 00:10:29.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.214 "dma_device_type": 2 00:10:29.214 } 00:10:29.214 ], 00:10:29.214 "driver_specific": {} 00:10:29.214 } 00:10:29.214 ] 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.214 [2024-11-20 17:44:56.382428] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.214 [2024-11-20 17:44:56.382557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.214 [2024-11-20 17:44:56.382612] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.214 [2024-11-20 17:44:56.384796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.214 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.475 
17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.475 "name": "Existed_Raid", 00:10:29.475 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:29.475 "strip_size_kb": 0, 00:10:29.475 "state": "configuring", 00:10:29.475 "raid_level": "raid1", 00:10:29.475 "superblock": true, 00:10:29.475 "num_base_bdevs": 3, 00:10:29.475 "num_base_bdevs_discovered": 2, 00:10:29.475 "num_base_bdevs_operational": 3, 00:10:29.475 "base_bdevs_list": [ 00:10:29.475 { 00:10:29.475 "name": "BaseBdev1", 00:10:29.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.475 "is_configured": false, 00:10:29.475 "data_offset": 0, 00:10:29.475 "data_size": 0 00:10:29.475 }, 00:10:29.475 { 00:10:29.475 "name": "BaseBdev2", 00:10:29.475 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:29.475 "is_configured": true, 00:10:29.475 "data_offset": 2048, 00:10:29.475 "data_size": 63488 00:10:29.475 }, 00:10:29.475 { 00:10:29.475 "name": "BaseBdev3", 00:10:29.475 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:29.475 "is_configured": true, 00:10:29.475 "data_offset": 2048, 00:10:29.475 "data_size": 63488 00:10:29.475 } 00:10:29.475 ] 00:10:29.475 }' 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.475 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.736 [2024-11-20 17:44:56.869687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.736 17:44:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.736 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.995 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.995 "name": 
"Existed_Raid", 00:10:29.995 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:29.995 "strip_size_kb": 0, 00:10:29.995 "state": "configuring", 00:10:29.995 "raid_level": "raid1", 00:10:29.995 "superblock": true, 00:10:29.995 "num_base_bdevs": 3, 00:10:29.995 "num_base_bdevs_discovered": 1, 00:10:29.995 "num_base_bdevs_operational": 3, 00:10:29.995 "base_bdevs_list": [ 00:10:29.995 { 00:10:29.995 "name": "BaseBdev1", 00:10:29.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.995 "is_configured": false, 00:10:29.995 "data_offset": 0, 00:10:29.995 "data_size": 0 00:10:29.995 }, 00:10:29.995 { 00:10:29.995 "name": null, 00:10:29.995 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:29.995 "is_configured": false, 00:10:29.995 "data_offset": 0, 00:10:29.995 "data_size": 63488 00:10:29.995 }, 00:10:29.995 { 00:10:29.995 "name": "BaseBdev3", 00:10:29.995 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:29.995 "is_configured": true, 00:10:29.995 "data_offset": 2048, 00:10:29.995 "data_size": 63488 00:10:29.995 } 00:10:29.995 ] 00:10:29.995 }' 00:10:29.995 17:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.995 17:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.255 
17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 [2024-11-20 17:44:57.378372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.255 BaseBdev1 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 [ 00:10:30.255 { 00:10:30.255 "name": "BaseBdev1", 00:10:30.255 "aliases": [ 00:10:30.255 "843a3f67-7ec3-414c-bb95-7cd7752b57fd" 00:10:30.255 ], 00:10:30.255 "product_name": "Malloc disk", 00:10:30.255 "block_size": 512, 00:10:30.255 "num_blocks": 65536, 00:10:30.255 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:30.255 "assigned_rate_limits": { 00:10:30.255 "rw_ios_per_sec": 0, 00:10:30.255 "rw_mbytes_per_sec": 0, 00:10:30.255 "r_mbytes_per_sec": 0, 00:10:30.255 "w_mbytes_per_sec": 0 00:10:30.255 }, 00:10:30.255 "claimed": true, 00:10:30.255 "claim_type": "exclusive_write", 00:10:30.255 "zoned": false, 00:10:30.255 "supported_io_types": { 00:10:30.255 "read": true, 00:10:30.255 "write": true, 00:10:30.255 "unmap": true, 00:10:30.255 "flush": true, 00:10:30.255 "reset": true, 00:10:30.255 "nvme_admin": false, 00:10:30.255 "nvme_io": false, 00:10:30.255 "nvme_io_md": false, 00:10:30.255 "write_zeroes": true, 00:10:30.255 "zcopy": true, 00:10:30.255 "get_zone_info": false, 00:10:30.255 "zone_management": false, 00:10:30.255 "zone_append": false, 00:10:30.255 "compare": false, 00:10:30.255 "compare_and_write": false, 00:10:30.255 "abort": true, 00:10:30.255 "seek_hole": false, 00:10:30.255 "seek_data": false, 00:10:30.255 "copy": true, 00:10:30.255 "nvme_iov_md": false 00:10:30.255 }, 00:10:30.255 "memory_domains": [ 00:10:30.255 { 00:10:30.255 "dma_device_id": "system", 00:10:30.255 "dma_device_type": 1 00:10:30.255 }, 00:10:30.255 { 00:10:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.255 "dma_device_type": 2 00:10:30.255 } 00:10:30.255 ], 00:10:30.255 "driver_specific": {} 00:10:30.255 } 00:10:30.255 ] 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.255 
17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.255 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.514 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.514 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.514 "name": "Existed_Raid", 00:10:30.514 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:30.514 "strip_size_kb": 0, 
00:10:30.514 "state": "configuring", 00:10:30.514 "raid_level": "raid1", 00:10:30.514 "superblock": true, 00:10:30.514 "num_base_bdevs": 3, 00:10:30.514 "num_base_bdevs_discovered": 2, 00:10:30.514 "num_base_bdevs_operational": 3, 00:10:30.514 "base_bdevs_list": [ 00:10:30.514 { 00:10:30.514 "name": "BaseBdev1", 00:10:30.514 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:30.514 "is_configured": true, 00:10:30.514 "data_offset": 2048, 00:10:30.514 "data_size": 63488 00:10:30.514 }, 00:10:30.514 { 00:10:30.514 "name": null, 00:10:30.514 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:30.514 "is_configured": false, 00:10:30.514 "data_offset": 0, 00:10:30.514 "data_size": 63488 00:10:30.514 }, 00:10:30.514 { 00:10:30.514 "name": "BaseBdev3", 00:10:30.514 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:30.514 "is_configured": true, 00:10:30.514 "data_offset": 2048, 00:10:30.514 "data_size": 63488 00:10:30.514 } 00:10:30.514 ] 00:10:30.514 }' 00:10:30.514 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.514 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.773 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 [2024-11-20 17:44:57.945517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.032 "name": "Existed_Raid", 00:10:31.032 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:31.032 "strip_size_kb": 0, 00:10:31.032 "state": "configuring", 00:10:31.032 "raid_level": "raid1", 00:10:31.032 "superblock": true, 00:10:31.032 "num_base_bdevs": 3, 00:10:31.032 "num_base_bdevs_discovered": 1, 00:10:31.032 "num_base_bdevs_operational": 3, 00:10:31.032 "base_bdevs_list": [ 00:10:31.032 { 00:10:31.032 "name": "BaseBdev1", 00:10:31.032 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:31.032 "is_configured": true, 00:10:31.032 "data_offset": 2048, 00:10:31.032 "data_size": 63488 00:10:31.032 }, 00:10:31.032 { 00:10:31.032 "name": null, 00:10:31.032 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:31.032 "is_configured": false, 00:10:31.032 "data_offset": 0, 00:10:31.032 "data_size": 63488 00:10:31.032 }, 00:10:31.032 { 00:10:31.032 "name": null, 00:10:31.032 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:31.032 "is_configured": false, 00:10:31.032 "data_offset": 0, 00:10:31.032 "data_size": 63488 00:10:31.032 } 00:10:31.032 ] 00:10:31.032 }' 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.032 17:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.292 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.292 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.292 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:31.292 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.292 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.551 [2024-11-20 17:44:58.476709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.551 "name": "Existed_Raid", 00:10:31.551 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:31.551 "strip_size_kb": 0, 00:10:31.551 "state": "configuring", 00:10:31.551 "raid_level": "raid1", 00:10:31.551 "superblock": true, 00:10:31.551 "num_base_bdevs": 3, 00:10:31.551 "num_base_bdevs_discovered": 2, 00:10:31.551 "num_base_bdevs_operational": 3, 00:10:31.551 "base_bdevs_list": [ 00:10:31.551 { 00:10:31.551 "name": "BaseBdev1", 00:10:31.551 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:31.551 "is_configured": true, 00:10:31.551 "data_offset": 2048, 00:10:31.551 "data_size": 63488 00:10:31.551 }, 00:10:31.551 { 00:10:31.551 "name": null, 00:10:31.551 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:31.551 "is_configured": false, 00:10:31.551 "data_offset": 0, 00:10:31.551 "data_size": 63488 00:10:31.551 }, 00:10:31.551 { 00:10:31.551 "name": "BaseBdev3", 00:10:31.551 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:31.551 "is_configured": true, 00:10:31.551 "data_offset": 2048, 00:10:31.551 "data_size": 63488 00:10:31.551 } 00:10:31.551 ] 00:10:31.551 }' 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.551 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.810 17:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 [2024-11-20 17:44:58.983924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.070 "name": "Existed_Raid", 00:10:32.070 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:32.070 "strip_size_kb": 0, 00:10:32.070 "state": "configuring", 00:10:32.070 "raid_level": "raid1", 00:10:32.070 "superblock": true, 00:10:32.070 "num_base_bdevs": 3, 00:10:32.070 "num_base_bdevs_discovered": 1, 00:10:32.070 "num_base_bdevs_operational": 3, 00:10:32.070 "base_bdevs_list": [ 00:10:32.070 { 00:10:32.070 "name": null, 00:10:32.070 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:32.070 "is_configured": false, 00:10:32.070 "data_offset": 0, 00:10:32.070 "data_size": 63488 00:10:32.070 }, 00:10:32.070 { 00:10:32.070 "name": null, 00:10:32.070 "uuid": 
"da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:32.070 "is_configured": false, 00:10:32.070 "data_offset": 0, 00:10:32.070 "data_size": 63488 00:10:32.070 }, 00:10:32.070 { 00:10:32.070 "name": "BaseBdev3", 00:10:32.070 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:32.070 "is_configured": true, 00:10:32.070 "data_offset": 2048, 00:10:32.070 "data_size": 63488 00:10:32.070 } 00:10:32.070 ] 00:10:32.070 }' 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.070 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 [2024-11-20 17:44:59.594035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.639 "name": "Existed_Raid", 00:10:32.639 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:32.639 "strip_size_kb": 0, 00:10:32.639 "state": "configuring", 00:10:32.639 
"raid_level": "raid1", 00:10:32.639 "superblock": true, 00:10:32.639 "num_base_bdevs": 3, 00:10:32.639 "num_base_bdevs_discovered": 2, 00:10:32.639 "num_base_bdevs_operational": 3, 00:10:32.639 "base_bdevs_list": [ 00:10:32.639 { 00:10:32.639 "name": null, 00:10:32.639 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:32.639 "is_configured": false, 00:10:32.639 "data_offset": 0, 00:10:32.639 "data_size": 63488 00:10:32.639 }, 00:10:32.639 { 00:10:32.639 "name": "BaseBdev2", 00:10:32.639 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:32.639 "is_configured": true, 00:10:32.639 "data_offset": 2048, 00:10:32.639 "data_size": 63488 00:10:32.639 }, 00:10:32.639 { 00:10:32.639 "name": "BaseBdev3", 00:10:32.639 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:32.639 "is_configured": true, 00:10:32.639 "data_offset": 2048, 00:10:32.639 "data_size": 63488 00:10:32.639 } 00:10:32.639 ] 00:10:32.639 }' 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.639 17:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.898 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.898 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.898 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.898 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.898 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.214 17:45:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 843a3f67-7ec3-414c-bb95-7cd7752b57fd 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.214 [2024-11-20 17:45:00.184644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.214 [2024-11-20 17:45:00.184965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:33.214 [2024-11-20 17:45:00.184980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:33.214 [2024-11-20 17:45:00.185306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:33.214 NewBaseBdev 00:10:33.214 [2024-11-20 17:45:00.185469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.214 [2024-11-20 17:45:00.185490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:33.214 [2024-11-20 17:45:00.185641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.214 
17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.214 [ 00:10:33.214 { 00:10:33.214 "name": "NewBaseBdev", 00:10:33.214 "aliases": [ 00:10:33.214 "843a3f67-7ec3-414c-bb95-7cd7752b57fd" 00:10:33.214 ], 00:10:33.214 "product_name": "Malloc disk", 00:10:33.214 "block_size": 512, 00:10:33.214 "num_blocks": 65536, 00:10:33.214 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:33.214 "assigned_rate_limits": { 00:10:33.214 "rw_ios_per_sec": 0, 00:10:33.214 "rw_mbytes_per_sec": 0, 00:10:33.214 "r_mbytes_per_sec": 0, 00:10:33.214 "w_mbytes_per_sec": 0 00:10:33.214 }, 00:10:33.214 "claimed": true, 00:10:33.214 "claim_type": "exclusive_write", 00:10:33.214 
"zoned": false, 00:10:33.214 "supported_io_types": { 00:10:33.214 "read": true, 00:10:33.214 "write": true, 00:10:33.214 "unmap": true, 00:10:33.214 "flush": true, 00:10:33.214 "reset": true, 00:10:33.214 "nvme_admin": false, 00:10:33.214 "nvme_io": false, 00:10:33.214 "nvme_io_md": false, 00:10:33.214 "write_zeroes": true, 00:10:33.214 "zcopy": true, 00:10:33.214 "get_zone_info": false, 00:10:33.214 "zone_management": false, 00:10:33.214 "zone_append": false, 00:10:33.214 "compare": false, 00:10:33.214 "compare_and_write": false, 00:10:33.214 "abort": true, 00:10:33.214 "seek_hole": false, 00:10:33.214 "seek_data": false, 00:10:33.214 "copy": true, 00:10:33.214 "nvme_iov_md": false 00:10:33.214 }, 00:10:33.214 "memory_domains": [ 00:10:33.214 { 00:10:33.214 "dma_device_id": "system", 00:10:33.214 "dma_device_type": 1 00:10:33.214 }, 00:10:33.214 { 00:10:33.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.214 "dma_device_type": 2 00:10:33.214 } 00:10:33.214 ], 00:10:33.214 "driver_specific": {} 00:10:33.214 } 00:10:33.214 ] 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.214 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.215 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.215 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.215 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.215 "name": "Existed_Raid", 00:10:33.215 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:33.215 "strip_size_kb": 0, 00:10:33.215 "state": "online", 00:10:33.215 "raid_level": "raid1", 00:10:33.215 "superblock": true, 00:10:33.215 "num_base_bdevs": 3, 00:10:33.215 "num_base_bdevs_discovered": 3, 00:10:33.215 "num_base_bdevs_operational": 3, 00:10:33.215 "base_bdevs_list": [ 00:10:33.215 { 00:10:33.215 "name": "NewBaseBdev", 00:10:33.215 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:33.215 "is_configured": true, 00:10:33.215 "data_offset": 2048, 00:10:33.215 "data_size": 63488 00:10:33.215 }, 00:10:33.215 { 00:10:33.215 "name": "BaseBdev2", 00:10:33.215 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:33.215 "is_configured": true, 00:10:33.215 "data_offset": 2048, 00:10:33.215 "data_size": 63488 00:10:33.215 }, 00:10:33.215 
{ 00:10:33.215 "name": "BaseBdev3", 00:10:33.215 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:33.215 "is_configured": true, 00:10:33.215 "data_offset": 2048, 00:10:33.215 "data_size": 63488 00:10:33.215 } 00:10:33.215 ] 00:10:33.215 }' 00:10:33.215 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.215 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.781 [2024-11-20 17:45:00.708199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.781 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.781 "name": "Existed_Raid", 00:10:33.781 
"aliases": [ 00:10:33.781 "186875e4-285e-4310-bb1c-a518358f21c0" 00:10:33.781 ], 00:10:33.781 "product_name": "Raid Volume", 00:10:33.781 "block_size": 512, 00:10:33.781 "num_blocks": 63488, 00:10:33.781 "uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:33.781 "assigned_rate_limits": { 00:10:33.781 "rw_ios_per_sec": 0, 00:10:33.781 "rw_mbytes_per_sec": 0, 00:10:33.781 "r_mbytes_per_sec": 0, 00:10:33.781 "w_mbytes_per_sec": 0 00:10:33.781 }, 00:10:33.781 "claimed": false, 00:10:33.781 "zoned": false, 00:10:33.781 "supported_io_types": { 00:10:33.781 "read": true, 00:10:33.781 "write": true, 00:10:33.781 "unmap": false, 00:10:33.781 "flush": false, 00:10:33.781 "reset": true, 00:10:33.781 "nvme_admin": false, 00:10:33.781 "nvme_io": false, 00:10:33.781 "nvme_io_md": false, 00:10:33.781 "write_zeroes": true, 00:10:33.781 "zcopy": false, 00:10:33.781 "get_zone_info": false, 00:10:33.782 "zone_management": false, 00:10:33.782 "zone_append": false, 00:10:33.782 "compare": false, 00:10:33.782 "compare_and_write": false, 00:10:33.782 "abort": false, 00:10:33.782 "seek_hole": false, 00:10:33.782 "seek_data": false, 00:10:33.782 "copy": false, 00:10:33.782 "nvme_iov_md": false 00:10:33.782 }, 00:10:33.782 "memory_domains": [ 00:10:33.782 { 00:10:33.782 "dma_device_id": "system", 00:10:33.782 "dma_device_type": 1 00:10:33.782 }, 00:10:33.782 { 00:10:33.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.782 "dma_device_type": 2 00:10:33.782 }, 00:10:33.782 { 00:10:33.782 "dma_device_id": "system", 00:10:33.782 "dma_device_type": 1 00:10:33.782 }, 00:10:33.782 { 00:10:33.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.782 "dma_device_type": 2 00:10:33.782 }, 00:10:33.782 { 00:10:33.782 "dma_device_id": "system", 00:10:33.782 "dma_device_type": 1 00:10:33.782 }, 00:10:33.782 { 00:10:33.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.782 "dma_device_type": 2 00:10:33.782 } 00:10:33.782 ], 00:10:33.782 "driver_specific": { 00:10:33.782 "raid": { 00:10:33.782 
"uuid": "186875e4-285e-4310-bb1c-a518358f21c0", 00:10:33.782 "strip_size_kb": 0, 00:10:33.782 "state": "online", 00:10:33.782 "raid_level": "raid1", 00:10:33.782 "superblock": true, 00:10:33.782 "num_base_bdevs": 3, 00:10:33.782 "num_base_bdevs_discovered": 3, 00:10:33.782 "num_base_bdevs_operational": 3, 00:10:33.782 "base_bdevs_list": [ 00:10:33.782 { 00:10:33.782 "name": "NewBaseBdev", 00:10:33.782 "uuid": "843a3f67-7ec3-414c-bb95-7cd7752b57fd", 00:10:33.782 "is_configured": true, 00:10:33.782 "data_offset": 2048, 00:10:33.782 "data_size": 63488 00:10:33.782 }, 00:10:33.782 { 00:10:33.782 "name": "BaseBdev2", 00:10:33.782 "uuid": "da509818-ddde-4434-a05a-3269de3fbdc1", 00:10:33.782 "is_configured": true, 00:10:33.782 "data_offset": 2048, 00:10:33.782 "data_size": 63488 00:10:33.782 }, 00:10:33.782 { 00:10:33.782 "name": "BaseBdev3", 00:10:33.782 "uuid": "5b89fa53-ff92-4a69-94a4-eee4966a28a5", 00:10:33.782 "is_configured": true, 00:10:33.782 "data_offset": 2048, 00:10:33.782 "data_size": 63488 00:10:33.782 } 00:10:33.782 ] 00:10:33.782 } 00:10:33.782 } 00:10:33.782 }' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:33.782 BaseBdev2 00:10:33.782 BaseBdev3' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.782 
17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.782 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.040 [2024-11-20 17:45:00.959353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.040 [2024-11-20 17:45:00.959484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.040 [2024-11-20 17:45:00.959597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.040 [2024-11-20 17:45:00.959947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.040 [2024-11-20 17:45:00.960000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:34.040 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.040 17:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68420 00:10:34.040 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68420 ']' 00:10:34.040 17:45:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68420 00:10:34.040 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:34.040 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.040 17:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68420 00:10:34.040 killing process with pid 68420 00:10:34.040 17:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.040 17:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.040 17:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68420' 00:10:34.040 17:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68420 00:10:34.040 [2024-11-20 17:45:01.006417] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.040 17:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68420 00:10:34.297 [2024-11-20 17:45:01.374131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.673 17:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:35.673 00:10:35.673 real 0m11.192s 00:10:35.673 user 0m17.357s 00:10:35.673 sys 0m2.107s 00:10:35.673 17:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.673 ************************************ 00:10:35.673 END TEST raid_state_function_test_sb 00:10:35.673 ************************************ 00:10:35.673 17:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.673 17:45:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:35.673 17:45:02 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:35.673 17:45:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.674 17:45:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.674 ************************************ 00:10:35.674 START TEST raid_superblock_test 00:10:35.674 ************************************ 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:35.674 17:45:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69046 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69046 00:10:35.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69046 ']' 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:35.674 17:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.932 [2024-11-20 17:45:02.889635] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:10:35.932 [2024-11-20 17:45:02.889761] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69046 ] 00:10:35.932 [2024-11-20 17:45:03.046284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.192 [2024-11-20 17:45:03.193579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.450 [2024-11-20 17:45:03.445437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.450 [2024-11-20 17:45:03.445522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:36.708 
17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.708 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.708 malloc1 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.709 [2024-11-20 17:45:03.845153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:36.709 [2024-11-20 17:45:03.845370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.709 [2024-11-20 17:45:03.845404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:36.709 [2024-11-20 17:45:03.845416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.709 [2024-11-20 17:45:03.848009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.709 [2024-11-20 17:45:03.848059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:36.709 pt1 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.709 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.967 malloc2 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.967 [2024-11-20 17:45:03.909215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:36.967 [2024-11-20 17:45:03.909398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.967 [2024-11-20 17:45:03.909452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:36.967 [2024-11-20 17:45:03.909526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.967 [2024-11-20 17:45:03.911993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.967 [2024-11-20 17:45:03.912094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:36.967 
pt2 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.967 malloc3 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.967 [2024-11-20 17:45:03.987948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:36.967 [2024-11-20 17:45:03.988117] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.967 [2024-11-20 17:45:03.988165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:36.967 [2024-11-20 17:45:03.988201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.967 [2024-11-20 17:45:03.990837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.967 [2024-11-20 17:45:03.990917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:36.967 pt3 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.967 17:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.967 [2024-11-20 17:45:03.999979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:36.967 [2024-11-20 17:45:04.002382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:36.967 [2024-11-20 17:45:04.002513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:36.967 [2024-11-20 17:45:04.002695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:36.967 [2024-11-20 17:45:04.002714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.967 [2024-11-20 17:45:04.002968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:36.967 
[2024-11-20 17:45:04.003183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:36.967 [2024-11-20 17:45:04.003198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:36.967 [2024-11-20 17:45:04.003372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.967 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.967 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:36.967 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.967 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.968 "name": "raid_bdev1", 00:10:36.968 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:36.968 "strip_size_kb": 0, 00:10:36.968 "state": "online", 00:10:36.968 "raid_level": "raid1", 00:10:36.968 "superblock": true, 00:10:36.968 "num_base_bdevs": 3, 00:10:36.968 "num_base_bdevs_discovered": 3, 00:10:36.968 "num_base_bdevs_operational": 3, 00:10:36.968 "base_bdevs_list": [ 00:10:36.968 { 00:10:36.968 "name": "pt1", 00:10:36.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.968 "is_configured": true, 00:10:36.968 "data_offset": 2048, 00:10:36.968 "data_size": 63488 00:10:36.968 }, 00:10:36.968 { 00:10:36.968 "name": "pt2", 00:10:36.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.968 "is_configured": true, 00:10:36.968 "data_offset": 2048, 00:10:36.968 "data_size": 63488 00:10:36.968 }, 00:10:36.968 { 00:10:36.968 "name": "pt3", 00:10:36.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.968 "is_configured": true, 00:10:36.968 "data_offset": 2048, 00:10:36.968 "data_size": 63488 00:10:36.968 } 00:10:36.968 ] 00:10:36.968 }' 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.968 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.534 17:45:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.534 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.534 [2024-11-20 17:45:04.491496] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.535 "name": "raid_bdev1", 00:10:37.535 "aliases": [ 00:10:37.535 "6d492d02-4f4a-4959-890e-3c3b8b9e035a" 00:10:37.535 ], 00:10:37.535 "product_name": "Raid Volume", 00:10:37.535 "block_size": 512, 00:10:37.535 "num_blocks": 63488, 00:10:37.535 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:37.535 "assigned_rate_limits": { 00:10:37.535 "rw_ios_per_sec": 0, 00:10:37.535 "rw_mbytes_per_sec": 0, 00:10:37.535 "r_mbytes_per_sec": 0, 00:10:37.535 "w_mbytes_per_sec": 0 00:10:37.535 }, 00:10:37.535 "claimed": false, 00:10:37.535 "zoned": false, 00:10:37.535 "supported_io_types": { 00:10:37.535 "read": true, 00:10:37.535 "write": true, 00:10:37.535 "unmap": false, 00:10:37.535 "flush": false, 00:10:37.535 "reset": true, 00:10:37.535 "nvme_admin": false, 00:10:37.535 "nvme_io": false, 00:10:37.535 "nvme_io_md": false, 00:10:37.535 "write_zeroes": true, 00:10:37.535 "zcopy": false, 00:10:37.535 "get_zone_info": false, 00:10:37.535 "zone_management": false, 00:10:37.535 "zone_append": false, 00:10:37.535 "compare": false, 00:10:37.535 
"compare_and_write": false, 00:10:37.535 "abort": false, 00:10:37.535 "seek_hole": false, 00:10:37.535 "seek_data": false, 00:10:37.535 "copy": false, 00:10:37.535 "nvme_iov_md": false 00:10:37.535 }, 00:10:37.535 "memory_domains": [ 00:10:37.535 { 00:10:37.535 "dma_device_id": "system", 00:10:37.535 "dma_device_type": 1 00:10:37.535 }, 00:10:37.535 { 00:10:37.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.535 "dma_device_type": 2 00:10:37.535 }, 00:10:37.535 { 00:10:37.535 "dma_device_id": "system", 00:10:37.535 "dma_device_type": 1 00:10:37.535 }, 00:10:37.535 { 00:10:37.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.535 "dma_device_type": 2 00:10:37.535 }, 00:10:37.535 { 00:10:37.535 "dma_device_id": "system", 00:10:37.535 "dma_device_type": 1 00:10:37.535 }, 00:10:37.535 { 00:10:37.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.535 "dma_device_type": 2 00:10:37.535 } 00:10:37.535 ], 00:10:37.535 "driver_specific": { 00:10:37.535 "raid": { 00:10:37.535 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:37.535 "strip_size_kb": 0, 00:10:37.535 "state": "online", 00:10:37.535 "raid_level": "raid1", 00:10:37.535 "superblock": true, 00:10:37.535 "num_base_bdevs": 3, 00:10:37.535 "num_base_bdevs_discovered": 3, 00:10:37.535 "num_base_bdevs_operational": 3, 00:10:37.535 "base_bdevs_list": [ 00:10:37.535 { 00:10:37.535 "name": "pt1", 00:10:37.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.535 "is_configured": true, 00:10:37.535 "data_offset": 2048, 00:10:37.535 "data_size": 63488 00:10:37.535 }, 00:10:37.535 { 00:10:37.535 "name": "pt2", 00:10:37.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.535 "is_configured": true, 00:10:37.535 "data_offset": 2048, 00:10:37.535 "data_size": 63488 00:10:37.535 }, 00:10:37.535 { 00:10:37.535 "name": "pt3", 00:10:37.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.535 "is_configured": true, 00:10:37.535 "data_offset": 2048, 00:10:37.535 "data_size": 63488 00:10:37.535 } 
00:10:37.535 ] 00:10:37.535 } 00:10:37.535 } 00:10:37.535 }' 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:37.535 pt2 00:10:37.535 pt3' 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.535 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.794 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.794 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.794 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.794 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.794 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:37.794 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 [2024-11-20 17:45:04.778870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6d492d02-4f4a-4959-890e-3c3b8b9e035a 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6d492d02-4f4a-4959-890e-3c3b8b9e035a ']' 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 [2024-11-20 17:45:04.826539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.795 [2024-11-20 17:45:04.826667] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.795 [2024-11-20 17:45:04.826787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.795 [2024-11-20 17:45:04.826898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.795 [2024-11-20 17:45:04.826952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:37.795 17:45:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.795 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.055 [2024-11-20 17:45:04.978287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:38.055 [2024-11-20 17:45:04.980511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:38.055 [2024-11-20 17:45:04.980623] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:38.055 [2024-11-20 17:45:04.980714] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:38.055 [2024-11-20 17:45:04.980826] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:38.055 [2024-11-20 17:45:04.980882] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:38.055 [2024-11-20 17:45:04.980949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.055 [2024-11-20 17:45:04.980980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:38.055 request: 00:10:38.055 { 00:10:38.055 "name": "raid_bdev1", 00:10:38.055 "raid_level": "raid1", 00:10:38.055 "base_bdevs": [ 00:10:38.055 "malloc1", 00:10:38.055 "malloc2", 00:10:38.055 "malloc3" 00:10:38.055 ], 00:10:38.055 "superblock": false, 00:10:38.055 "method": "bdev_raid_create", 00:10:38.055 "req_id": 1 00:10:38.055 } 00:10:38.055 Got JSON-RPC error response 00:10:38.055 response: 00:10:38.055 { 00:10:38.055 "code": -17, 00:10:38.055 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:38.055 } 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.055 17:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.055 [2024-11-20 17:45:05.046156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:38.055 [2024-11-20 17:45:05.046298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.055 [2024-11-20 17:45:05.046337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:38.055 [2024-11-20 17:45:05.046364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.055 [2024-11-20 17:45:05.048969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.055 [2024-11-20 17:45:05.049077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:38.055 [2024-11-20 17:45:05.049204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:38.055 [2024-11-20 17:45:05.049280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.055 pt1 00:10:38.055 
17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.055 "name": "raid_bdev1", 00:10:38.055 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:38.055 "strip_size_kb": 0, 00:10:38.055 
"state": "configuring", 00:10:38.055 "raid_level": "raid1", 00:10:38.055 "superblock": true, 00:10:38.055 "num_base_bdevs": 3, 00:10:38.055 "num_base_bdevs_discovered": 1, 00:10:38.055 "num_base_bdevs_operational": 3, 00:10:38.055 "base_bdevs_list": [ 00:10:38.055 { 00:10:38.055 "name": "pt1", 00:10:38.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.055 "is_configured": true, 00:10:38.055 "data_offset": 2048, 00:10:38.055 "data_size": 63488 00:10:38.055 }, 00:10:38.055 { 00:10:38.055 "name": null, 00:10:38.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.055 "is_configured": false, 00:10:38.055 "data_offset": 2048, 00:10:38.055 "data_size": 63488 00:10:38.055 }, 00:10:38.055 { 00:10:38.055 "name": null, 00:10:38.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.055 "is_configured": false, 00:10:38.055 "data_offset": 2048, 00:10:38.055 "data_size": 63488 00:10:38.055 } 00:10:38.055 ] 00:10:38.055 }' 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.055 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.622 [2024-11-20 17:45:05.537331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.622 [2024-11-20 17:45:05.537430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.622 [2024-11-20 17:45:05.537456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:38.622 
[2024-11-20 17:45:05.537467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.622 [2024-11-20 17:45:05.537992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.622 [2024-11-20 17:45:05.538033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.622 [2024-11-20 17:45:05.538148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:38.622 [2024-11-20 17:45:05.538176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.622 pt2 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.622 [2024-11-20 17:45:05.549283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.622 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.623 "name": "raid_bdev1", 00:10:38.623 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:38.623 "strip_size_kb": 0, 00:10:38.623 "state": "configuring", 00:10:38.623 "raid_level": "raid1", 00:10:38.623 "superblock": true, 00:10:38.623 "num_base_bdevs": 3, 00:10:38.623 "num_base_bdevs_discovered": 1, 00:10:38.623 "num_base_bdevs_operational": 3, 00:10:38.623 "base_bdevs_list": [ 00:10:38.623 { 00:10:38.623 "name": "pt1", 00:10:38.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.623 "is_configured": true, 00:10:38.623 "data_offset": 2048, 00:10:38.623 "data_size": 63488 00:10:38.623 }, 00:10:38.623 { 00:10:38.623 "name": null, 00:10:38.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.623 "is_configured": false, 00:10:38.623 "data_offset": 0, 00:10:38.623 "data_size": 63488 00:10:38.623 }, 00:10:38.623 { 00:10:38.623 "name": null, 00:10:38.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.623 "is_configured": false, 00:10:38.623 
"data_offset": 2048, 00:10:38.623 "data_size": 63488 00:10:38.623 } 00:10:38.623 ] 00:10:38.623 }' 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.623 17:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.882 [2024-11-20 17:45:06.032517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.882 [2024-11-20 17:45:06.032750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.882 [2024-11-20 17:45:06.032799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:38.882 [2024-11-20 17:45:06.032842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.882 [2024-11-20 17:45:06.033523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.882 [2024-11-20 17:45:06.033596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.882 [2024-11-20 17:45:06.033738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:38.882 [2024-11-20 17:45:06.033818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.882 pt2 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.882 17:45:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.882 [2024-11-20 17:45:06.044436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:38.882 [2024-11-20 17:45:06.044531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.882 [2024-11-20 17:45:06.044562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:38.882 [2024-11-20 17:45:06.044593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.882 [2024-11-20 17:45:06.045112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.882 [2024-11-20 17:45:06.045180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:38.882 [2024-11-20 17:45:06.045286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:38.882 [2024-11-20 17:45:06.045343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:38.882 [2024-11-20 17:45:06.045537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.882 [2024-11-20 17:45:06.045582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.882 [2024-11-20 17:45:06.045866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:38.882 [2024-11-20 17:45:06.046092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:38.882 [2024-11-20 17:45:06.046133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:38.882 [2024-11-20 17:45:06.046315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.882 pt3 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.882 17:45:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:39.142 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.142 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.142 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.142 "name": "raid_bdev1", 00:10:39.142 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:39.142 "strip_size_kb": 0, 00:10:39.142 "state": "online", 00:10:39.142 "raid_level": "raid1", 00:10:39.142 "superblock": true, 00:10:39.142 "num_base_bdevs": 3, 00:10:39.142 "num_base_bdevs_discovered": 3, 00:10:39.142 "num_base_bdevs_operational": 3, 00:10:39.142 "base_bdevs_list": [ 00:10:39.142 { 00:10:39.142 "name": "pt1", 00:10:39.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.142 "is_configured": true, 00:10:39.142 "data_offset": 2048, 00:10:39.142 "data_size": 63488 00:10:39.142 }, 00:10:39.142 { 00:10:39.142 "name": "pt2", 00:10:39.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.142 "is_configured": true, 00:10:39.142 "data_offset": 2048, 00:10:39.142 "data_size": 63488 00:10:39.142 }, 00:10:39.142 { 00:10:39.142 "name": "pt3", 00:10:39.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.142 "is_configured": true, 00:10:39.142 "data_offset": 2048, 00:10:39.142 "data_size": 63488 00:10:39.142 } 00:10:39.142 ] 00:10:39.142 }' 00:10:39.142 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.142 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.401 [2024-11-20 17:45:06.523975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.401 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.401 "name": "raid_bdev1", 00:10:39.401 "aliases": [ 00:10:39.401 "6d492d02-4f4a-4959-890e-3c3b8b9e035a" 00:10:39.401 ], 00:10:39.401 "product_name": "Raid Volume", 00:10:39.401 "block_size": 512, 00:10:39.401 "num_blocks": 63488, 00:10:39.401 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:39.401 "assigned_rate_limits": { 00:10:39.401 "rw_ios_per_sec": 0, 00:10:39.401 "rw_mbytes_per_sec": 0, 00:10:39.401 "r_mbytes_per_sec": 0, 00:10:39.401 "w_mbytes_per_sec": 0 00:10:39.401 }, 00:10:39.401 "claimed": false, 00:10:39.401 "zoned": false, 00:10:39.401 "supported_io_types": { 00:10:39.401 "read": true, 00:10:39.401 "write": true, 00:10:39.401 "unmap": false, 00:10:39.401 "flush": false, 00:10:39.401 "reset": true, 00:10:39.401 "nvme_admin": false, 00:10:39.401 "nvme_io": false, 00:10:39.401 "nvme_io_md": false, 00:10:39.401 "write_zeroes": true, 00:10:39.401 "zcopy": false, 00:10:39.401 "get_zone_info": false, 
00:10:39.401 "zone_management": false, 00:10:39.401 "zone_append": false, 00:10:39.401 "compare": false, 00:10:39.401 "compare_and_write": false, 00:10:39.401 "abort": false, 00:10:39.401 "seek_hole": false, 00:10:39.401 "seek_data": false, 00:10:39.401 "copy": false, 00:10:39.401 "nvme_iov_md": false 00:10:39.401 }, 00:10:39.401 "memory_domains": [ 00:10:39.401 { 00:10:39.401 "dma_device_id": "system", 00:10:39.401 "dma_device_type": 1 00:10:39.401 }, 00:10:39.401 { 00:10:39.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.401 "dma_device_type": 2 00:10:39.401 }, 00:10:39.401 { 00:10:39.401 "dma_device_id": "system", 00:10:39.401 "dma_device_type": 1 00:10:39.401 }, 00:10:39.401 { 00:10:39.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.401 "dma_device_type": 2 00:10:39.401 }, 00:10:39.401 { 00:10:39.401 "dma_device_id": "system", 00:10:39.401 "dma_device_type": 1 00:10:39.401 }, 00:10:39.401 { 00:10:39.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.401 "dma_device_type": 2 00:10:39.401 } 00:10:39.401 ], 00:10:39.401 "driver_specific": { 00:10:39.401 "raid": { 00:10:39.401 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:39.401 "strip_size_kb": 0, 00:10:39.401 "state": "online", 00:10:39.401 "raid_level": "raid1", 00:10:39.401 "superblock": true, 00:10:39.401 "num_base_bdevs": 3, 00:10:39.401 "num_base_bdevs_discovered": 3, 00:10:39.401 "num_base_bdevs_operational": 3, 00:10:39.401 "base_bdevs_list": [ 00:10:39.401 { 00:10:39.401 "name": "pt1", 00:10:39.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.401 "is_configured": true, 00:10:39.401 "data_offset": 2048, 00:10:39.401 "data_size": 63488 00:10:39.401 }, 00:10:39.401 { 00:10:39.401 "name": "pt2", 00:10:39.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.401 "is_configured": true, 00:10:39.401 "data_offset": 2048, 00:10:39.402 "data_size": 63488 00:10:39.402 }, 00:10:39.402 { 00:10:39.402 "name": "pt3", 00:10:39.402 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:39.402 "is_configured": true, 00:10:39.402 "data_offset": 2048, 00:10:39.402 "data_size": 63488 00:10:39.402 } 00:10:39.402 ] 00:10:39.402 } 00:10:39.402 } 00:10:39.402 }' 00:10:39.402 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:39.660 pt2 00:10:39.660 pt3' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.660 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.661 [2024-11-20 17:45:06.807434] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.661 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6d492d02-4f4a-4959-890e-3c3b8b9e035a '!=' 6d492d02-4f4a-4959-890e-3c3b8b9e035a ']' 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.920 [2024-11-20 17:45:06.851200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.920 17:45:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.920 "name": "raid_bdev1", 00:10:39.920 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:39.920 "strip_size_kb": 0, 00:10:39.920 "state": "online", 00:10:39.920 "raid_level": "raid1", 00:10:39.920 "superblock": true, 00:10:39.920 "num_base_bdevs": 3, 00:10:39.920 "num_base_bdevs_discovered": 2, 00:10:39.920 "num_base_bdevs_operational": 2, 00:10:39.920 "base_bdevs_list": [ 00:10:39.920 { 00:10:39.920 "name": null, 00:10:39.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.920 "is_configured": false, 00:10:39.920 "data_offset": 0, 00:10:39.920 "data_size": 63488 00:10:39.920 }, 00:10:39.920 { 00:10:39.920 "name": "pt2", 00:10:39.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.920 "is_configured": true, 00:10:39.920 "data_offset": 2048, 00:10:39.920 "data_size": 63488 00:10:39.920 }, 00:10:39.920 { 00:10:39.920 "name": "pt3", 00:10:39.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.920 "is_configured": true, 00:10:39.920 "data_offset": 2048, 00:10:39.920 "data_size": 63488 00:10:39.920 } 
00:10:39.920 ] 00:10:39.920 }' 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.920 17:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.178 [2024-11-20 17:45:07.278428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.178 [2024-11-20 17:45:07.278565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.178 [2024-11-20 17:45:07.278715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.178 [2024-11-20 17:45:07.278818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.178 [2024-11-20 17:45:07.278874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.178 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.437 17:45:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.437 [2024-11-20 17:45:07.366207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:40.437 [2024-11-20 17:45:07.366302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.437 [2024-11-20 17:45:07.366335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:40.437 [2024-11-20 17:45:07.366349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.437 [2024-11-20 17:45:07.369249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.437 [2024-11-20 17:45:07.369397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:40.437 [2024-11-20 17:45:07.369518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:40.437 [2024-11-20 17:45:07.369585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:40.437 pt2 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.437 17:45:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.437 "name": "raid_bdev1", 00:10:40.437 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:40.437 "strip_size_kb": 0, 00:10:40.437 "state": "configuring", 00:10:40.437 "raid_level": "raid1", 00:10:40.437 "superblock": true, 00:10:40.437 "num_base_bdevs": 3, 00:10:40.437 "num_base_bdevs_discovered": 1, 00:10:40.437 "num_base_bdevs_operational": 2, 00:10:40.437 "base_bdevs_list": [ 00:10:40.437 { 00:10:40.437 "name": null, 00:10:40.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.437 "is_configured": false, 00:10:40.437 "data_offset": 2048, 00:10:40.437 "data_size": 63488 00:10:40.437 }, 00:10:40.437 { 00:10:40.437 "name": "pt2", 00:10:40.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.437 "is_configured": true, 00:10:40.437 "data_offset": 2048, 00:10:40.437 "data_size": 63488 00:10:40.437 }, 00:10:40.437 { 00:10:40.437 "name": null, 00:10:40.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.437 "is_configured": false, 00:10:40.437 "data_offset": 2048, 00:10:40.437 "data_size": 63488 00:10:40.437 } 
00:10:40.437 ] 00:10:40.437 }' 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.437 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.695 [2024-11-20 17:45:07.785613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:40.695 [2024-11-20 17:45:07.785832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.695 [2024-11-20 17:45:07.785863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:40.695 [2024-11-20 17:45:07.785877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.695 [2024-11-20 17:45:07.786513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.695 [2024-11-20 17:45:07.786547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:40.695 [2024-11-20 17:45:07.786688] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:40.695 [2024-11-20 17:45:07.786724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:40.695 [2024-11-20 17:45:07.786910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:40.695 [2024-11-20 17:45:07.786924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.695 [2024-11-20 17:45:07.787261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:40.695 [2024-11-20 17:45:07.787481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:40.695 [2024-11-20 17:45:07.787493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:40.695 [2024-11-20 17:45:07.787670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.695 pt3 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.695 
17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.695 "name": "raid_bdev1", 00:10:40.695 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:40.695 "strip_size_kb": 0, 00:10:40.695 "state": "online", 00:10:40.695 "raid_level": "raid1", 00:10:40.695 "superblock": true, 00:10:40.695 "num_base_bdevs": 3, 00:10:40.695 "num_base_bdevs_discovered": 2, 00:10:40.695 "num_base_bdevs_operational": 2, 00:10:40.695 "base_bdevs_list": [ 00:10:40.695 { 00:10:40.695 "name": null, 00:10:40.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.695 "is_configured": false, 00:10:40.695 "data_offset": 2048, 00:10:40.695 "data_size": 63488 00:10:40.695 }, 00:10:40.695 { 00:10:40.695 "name": "pt2", 00:10:40.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.695 "is_configured": true, 00:10:40.695 "data_offset": 2048, 00:10:40.695 "data_size": 63488 00:10:40.695 }, 00:10:40.695 { 00:10:40.695 "name": "pt3", 00:10:40.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.695 "is_configured": true, 00:10:40.695 "data_offset": 2048, 00:10:40.695 "data_size": 63488 00:10:40.695 } 00:10:40.695 ] 00:10:40.695 }' 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.695 17:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.262 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.263 [2024-11-20 17:45:08.240886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.263 [2024-11-20 17:45:08.241049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.263 [2024-11-20 17:45:08.241190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.263 [2024-11-20 17:45:08.241305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.263 [2024-11-20 17:45:08.241356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.263 [2024-11-20 17:45:08.308806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:41.263 [2024-11-20 17:45:08.308978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.263 [2024-11-20 17:45:08.309007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:41.263 [2024-11-20 17:45:08.309029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.263 [2024-11-20 17:45:08.311916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.263 [2024-11-20 17:45:08.311960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:41.263 [2024-11-20 17:45:08.312081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:41.263 [2024-11-20 17:45:08.312159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:41.263 [2024-11-20 17:45:08.312349] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:41.263 [2024-11-20 17:45:08.312361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.263 [2024-11-20 17:45:08.312382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:41.263 [2024-11-20 17:45:08.312453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:41.263 pt1 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.263 "name": "raid_bdev1", 00:10:41.263 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:41.263 "strip_size_kb": 0, 00:10:41.263 "state": "configuring", 00:10:41.263 "raid_level": "raid1", 00:10:41.263 "superblock": true, 00:10:41.263 "num_base_bdevs": 3, 00:10:41.263 "num_base_bdevs_discovered": 1, 00:10:41.263 "num_base_bdevs_operational": 2, 00:10:41.263 "base_bdevs_list": [ 00:10:41.263 { 00:10:41.263 "name": null, 00:10:41.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.263 "is_configured": false, 00:10:41.263 "data_offset": 2048, 00:10:41.263 "data_size": 63488 00:10:41.263 }, 00:10:41.263 { 00:10:41.263 "name": "pt2", 00:10:41.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.263 "is_configured": true, 00:10:41.263 "data_offset": 2048, 00:10:41.263 "data_size": 63488 00:10:41.263 }, 00:10:41.263 { 00:10:41.263 "name": null, 00:10:41.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.263 "is_configured": false, 00:10:41.263 "data_offset": 2048, 00:10:41.263 "data_size": 63488 00:10:41.263 } 00:10:41.263 ] 00:10:41.263 }' 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.263 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.832 [2024-11-20 17:45:08.811977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:41.832 [2024-11-20 17:45:08.812214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.832 [2024-11-20 17:45:08.812282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:41.832 [2024-11-20 17:45:08.812323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.832 [2024-11-20 17:45:08.813077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.832 [2024-11-20 17:45:08.813148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:41.832 [2024-11-20 17:45:08.813306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:41.832 [2024-11-20 17:45:08.813367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:41.832 [2024-11-20 17:45:08.813574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:41.832 [2024-11-20 17:45:08.813621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:41.832 [2024-11-20 17:45:08.813952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:41.832 [2024-11-20 17:45:08.814187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:41.832 [2024-11-20 17:45:08.814242] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:41.832 [2024-11-20 17:45:08.814446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.832 pt3 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.832 "name": "raid_bdev1", 00:10:41.832 "uuid": "6d492d02-4f4a-4959-890e-3c3b8b9e035a", 00:10:41.832 "strip_size_kb": 0, 00:10:41.832 "state": "online", 00:10:41.832 "raid_level": "raid1", 00:10:41.832 "superblock": true, 00:10:41.832 "num_base_bdevs": 3, 00:10:41.832 "num_base_bdevs_discovered": 2, 00:10:41.832 "num_base_bdevs_operational": 2, 00:10:41.832 "base_bdevs_list": [ 00:10:41.832 { 00:10:41.832 "name": null, 00:10:41.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.832 "is_configured": false, 00:10:41.832 "data_offset": 2048, 00:10:41.832 "data_size": 63488 00:10:41.832 }, 00:10:41.832 { 00:10:41.832 "name": "pt2", 00:10:41.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.832 "is_configured": true, 00:10:41.832 "data_offset": 2048, 00:10:41.832 "data_size": 63488 00:10:41.832 }, 00:10:41.832 { 00:10:41.832 "name": "pt3", 00:10:41.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.832 "is_configured": true, 00:10:41.832 "data_offset": 2048, 00:10:41.832 "data_size": 63488 00:10:41.832 } 00:10:41.832 ] 00:10:41.832 }' 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.832 17:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.092 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.092 [2024-11-20 17:45:09.255543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6d492d02-4f4a-4959-890e-3c3b8b9e035a '!=' 6d492d02-4f4a-4959-890e-3c3b8b9e035a ']' 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69046 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69046 ']' 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69046 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69046 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69046' 00:10:42.350 killing process with pid 69046 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 69046 00:10:42.350 [2024-11-20 17:45:09.312995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.350 [2024-11-20 17:45:09.313151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.350 17:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69046 00:10:42.350 [2024-11-20 17:45:09.313236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.350 [2024-11-20 17:45:09.313252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:42.608 [2024-11-20 17:45:09.702368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.987 ************************************ 00:10:43.987 END TEST raid_superblock_test 00:10:43.987 ************************************ 00:10:43.987 17:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:43.987 00:10:43.987 real 0m8.323s 00:10:43.987 user 0m12.695s 00:10:43.987 sys 0m1.539s 00:10:43.987 17:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.987 17:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.247 17:45:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:44.247 17:45:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:44.247 17:45:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.247 17:45:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.247 ************************************ 00:10:44.247 START TEST raid_read_error_test 00:10:44.247 ************************************ 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:44.247 17:45:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:44.247 17:45:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.c5iDoVacKj 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69497 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69497 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69497 ']' 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.247 17:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.247 [2024-11-20 17:45:11.316303] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:10:44.247 [2024-11-20 17:45:11.316554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69497 ] 00:10:44.507 [2024-11-20 17:45:11.486318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.507 [2024-11-20 17:45:11.644112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.767 [2024-11-20 17:45:11.914527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.767 [2024-11-20 17:45:11.914597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.027 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.027 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:45.027 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.027 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:45.027 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.027 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.287 BaseBdev1_malloc 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.287 true 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.287 [2024-11-20 17:45:12.236994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:45.287 [2024-11-20 17:45:12.237184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.287 [2024-11-20 17:45:12.237243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:45.287 [2024-11-20 17:45:12.237286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.287 [2024-11-20 17:45:12.240118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.287 [2024-11-20 17:45:12.240202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:45.287 BaseBdev1 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.287 BaseBdev2_malloc 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:45.287 17:45:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.288 true 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.288 [2024-11-20 17:45:12.317216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:45.288 [2024-11-20 17:45:12.317294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.288 [2024-11-20 17:45:12.317316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:45.288 [2024-11-20 17:45:12.317330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.288 [2024-11-20 17:45:12.320196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.288 [2024-11-20 17:45:12.320237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:45.288 BaseBdev2 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.288 BaseBdev3_malloc 00:10:45.288 17:45:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.288 true 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.288 [2024-11-20 17:45:12.411287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:45.288 [2024-11-20 17:45:12.411362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.288 [2024-11-20 17:45:12.411384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:45.288 [2024-11-20 17:45:12.411396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.288 [2024-11-20 17:45:12.414205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.288 [2024-11-20 17:45:12.414246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:45.288 BaseBdev3 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.288 [2024-11-20 17:45:12.423353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.288 [2024-11-20 17:45:12.425807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.288 [2024-11-20 17:45:12.425902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.288 [2024-11-20 17:45:12.426173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.288 [2024-11-20 17:45:12.426189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.288 [2024-11-20 17:45:12.426478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:45.288 [2024-11-20 17:45:12.426796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.288 [2024-11-20 17:45:12.426816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:45.288 [2024-11-20 17:45:12.427035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.288 17:45:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.288 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.547 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.547 "name": "raid_bdev1", 00:10:45.547 "uuid": "39d6578a-f8d4-4760-97fa-702ab41bc5c6", 00:10:45.547 "strip_size_kb": 0, 00:10:45.547 "state": "online", 00:10:45.547 "raid_level": "raid1", 00:10:45.547 "superblock": true, 00:10:45.547 "num_base_bdevs": 3, 00:10:45.547 "num_base_bdevs_discovered": 3, 00:10:45.547 "num_base_bdevs_operational": 3, 00:10:45.547 "base_bdevs_list": [ 00:10:45.547 { 00:10:45.547 "name": "BaseBdev1", 00:10:45.547 "uuid": "940bbcf4-b7f2-5997-9f1d-a3631fb2914d", 00:10:45.547 "is_configured": true, 00:10:45.547 "data_offset": 2048, 00:10:45.547 "data_size": 63488 00:10:45.547 }, 00:10:45.547 { 00:10:45.547 "name": "BaseBdev2", 00:10:45.547 "uuid": "5d668d49-052d-51e6-b8bf-54b774523dcb", 00:10:45.547 "is_configured": true, 00:10:45.547 "data_offset": 2048, 00:10:45.547 "data_size": 63488 
00:10:45.547 }, 00:10:45.547 { 00:10:45.547 "name": "BaseBdev3", 00:10:45.547 "uuid": "e61af482-76b9-5acc-af6e-bc15726d0bf1", 00:10:45.547 "is_configured": true, 00:10:45.547 "data_offset": 2048, 00:10:45.547 "data_size": 63488 00:10:45.547 } 00:10:45.547 ] 00:10:45.547 }' 00:10:45.547 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.547 17:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.807 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:45.807 17:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:46.065 [2024-11-20 17:45:13.004135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.002 
17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.002 "name": "raid_bdev1", 00:10:47.002 "uuid": "39d6578a-f8d4-4760-97fa-702ab41bc5c6", 00:10:47.002 "strip_size_kb": 0, 00:10:47.002 "state": "online", 00:10:47.002 "raid_level": "raid1", 00:10:47.002 "superblock": true, 00:10:47.002 "num_base_bdevs": 3, 00:10:47.002 "num_base_bdevs_discovered": 3, 00:10:47.002 "num_base_bdevs_operational": 3, 00:10:47.002 "base_bdevs_list": [ 00:10:47.002 { 00:10:47.002 "name": "BaseBdev1", 00:10:47.002 "uuid": "940bbcf4-b7f2-5997-9f1d-a3631fb2914d", 
00:10:47.002 "is_configured": true, 00:10:47.002 "data_offset": 2048, 00:10:47.002 "data_size": 63488 00:10:47.002 }, 00:10:47.002 { 00:10:47.002 "name": "BaseBdev2", 00:10:47.002 "uuid": "5d668d49-052d-51e6-b8bf-54b774523dcb", 00:10:47.002 "is_configured": true, 00:10:47.002 "data_offset": 2048, 00:10:47.002 "data_size": 63488 00:10:47.002 }, 00:10:47.002 { 00:10:47.002 "name": "BaseBdev3", 00:10:47.002 "uuid": "e61af482-76b9-5acc-af6e-bc15726d0bf1", 00:10:47.002 "is_configured": true, 00:10:47.002 "data_offset": 2048, 00:10:47.002 "data_size": 63488 00:10:47.002 } 00:10:47.002 ] 00:10:47.002 }' 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.002 17:45:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.262 [2024-11-20 17:45:14.382139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.262 [2024-11-20 17:45:14.382191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.262 [2024-11-20 17:45:14.385690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.262 [2024-11-20 17:45:14.385791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.262 [2024-11-20 17:45:14.386051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.262 [2024-11-20 17:45:14.386112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:47.262 { 00:10:47.262 "results": [ 00:10:47.262 { 00:10:47.262 "job": "raid_bdev1", 
00:10:47.262 "core_mask": "0x1", 00:10:47.262 "workload": "randrw", 00:10:47.262 "percentage": 50, 00:10:47.262 "status": "finished", 00:10:47.262 "queue_depth": 1, 00:10:47.262 "io_size": 131072, 00:10:47.262 "runtime": 1.378291, 00:10:47.262 "iops": 8932.076027486213, 00:10:47.262 "mibps": 1116.5095034357767, 00:10:47.262 "io_failed": 0, 00:10:47.262 "io_timeout": 0, 00:10:47.262 "avg_latency_us": 108.77702399139619, 00:10:47.262 "min_latency_us": 26.606113537117903, 00:10:47.262 "max_latency_us": 1931.7379912663755 00:10:47.262 } 00:10:47.262 ], 00:10:47.262 "core_count": 1 00:10:47.262 } 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69497 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69497 ']' 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69497 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69497 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69497' 00:10:47.262 killing process with pid 69497 00:10:47.262 17:45:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69497 00:10:47.262 [2024-11-20 17:45:14.436531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.262 17:45:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69497 00:10:47.832 [2024-11-20 17:45:14.741069] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.c5iDoVacKj 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:49.213 00:10:49.213 real 0m5.051s 00:10:49.213 user 0m5.835s 00:10:49.213 sys 0m0.722s 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.213 17:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.213 ************************************ 00:10:49.213 END TEST raid_read_error_test 00:10:49.213 ************************************ 00:10:49.213 17:45:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:49.213 17:45:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:49.213 17:45:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.213 17:45:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:49.213 ************************************ 00:10:49.213 START TEST raid_write_error_test 00:10:49.213 ************************************ 00:10:49.213 17:45:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xZd6icC2wk 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69647 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69647 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69647 ']' 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.213 17:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 [2024-11-20 17:45:16.420769] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:10:49.473 [2024-11-20 17:45:16.420897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69647 ] 00:10:49.473 [2024-11-20 17:45:16.594175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.732 [2024-11-20 17:45:16.750968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.992 [2024-11-20 17:45:17.049830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.992 [2024-11-20 17:45:17.050055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 BaseBdev1_malloc 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 true 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 [2024-11-20 17:45:17.327167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:50.252 [2024-11-20 17:45:17.327334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.252 [2024-11-20 17:45:17.327384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:50.252 [2024-11-20 17:45:17.327442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.252 [2024-11-20 17:45:17.330069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.252 [2024-11-20 17:45:17.330116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:50.252 BaseBdev1 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.252 BaseBdev2_malloc 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 true 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 [2024-11-20 17:45:17.401936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:50.252 [2024-11-20 17:45:17.402082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.252 [2024-11-20 17:45:17.402104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:50.252 [2024-11-20 17:45:17.402118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.252 [2024-11-20 17:45:17.404604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.252 [2024-11-20 17:45:17.404654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:50.252 BaseBdev2 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.252 17:45:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.513 BaseBdev3_malloc 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.513 true 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.513 [2024-11-20 17:45:17.488818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:50.513 [2024-11-20 17:45:17.488892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.513 [2024-11-20 17:45:17.488914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:50.513 [2024-11-20 17:45:17.488928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.513 [2024-11-20 17:45:17.491560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.513 [2024-11-20 17:45:17.491609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:50.513 BaseBdev3 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.513 [2024-11-20 17:45:17.500889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.513 [2024-11-20 17:45:17.502963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.513 [2024-11-20 17:45:17.503150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.513 [2024-11-20 17:45:17.503402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:50.513 [2024-11-20 17:45:17.503420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:50.513 [2024-11-20 17:45:17.503672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:50.513 [2024-11-20 17:45:17.503850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:50.513 [2024-11-20 17:45:17.503863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:50.513 [2024-11-20 17:45:17.504107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.513 "name": "raid_bdev1", 00:10:50.513 "uuid": "0fb1b262-0796-4de1-9056-c26a26176dee", 00:10:50.513 "strip_size_kb": 0, 00:10:50.513 "state": "online", 00:10:50.513 "raid_level": "raid1", 00:10:50.513 "superblock": true, 00:10:50.513 "num_base_bdevs": 3, 00:10:50.513 "num_base_bdevs_discovered": 3, 00:10:50.513 "num_base_bdevs_operational": 3, 00:10:50.513 "base_bdevs_list": [ 00:10:50.513 { 00:10:50.513 "name": "BaseBdev1", 00:10:50.513 
"uuid": "f201fc82-4ecf-511b-89f8-4469d0992f69", 00:10:50.513 "is_configured": true, 00:10:50.513 "data_offset": 2048, 00:10:50.513 "data_size": 63488 00:10:50.513 }, 00:10:50.513 { 00:10:50.513 "name": "BaseBdev2", 00:10:50.513 "uuid": "3c26e3b0-477c-5f12-ace0-ee7be4c83723", 00:10:50.513 "is_configured": true, 00:10:50.513 "data_offset": 2048, 00:10:50.513 "data_size": 63488 00:10:50.513 }, 00:10:50.513 { 00:10:50.513 "name": "BaseBdev3", 00:10:50.513 "uuid": "bc2defd0-e59c-5beb-87c3-c0013b880fbe", 00:10:50.513 "is_configured": true, 00:10:50.513 "data_offset": 2048, 00:10:50.513 "data_size": 63488 00:10:50.513 } 00:10:50.513 ] 00:10:50.513 }' 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.513 17:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.103 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:51.103 17:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:51.103 [2024-11-20 17:45:18.085914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:52.043 17:45:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:52.043 17:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.043 17:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.043 [2024-11-20 17:45:18.997421] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:52.043 [2024-11-20 17:45:18.997598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.043 [2024-11-20 17:45:18.997878] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.043 "name": "raid_bdev1", 00:10:52.043 "uuid": "0fb1b262-0796-4de1-9056-c26a26176dee", 00:10:52.043 "strip_size_kb": 0, 00:10:52.043 "state": "online", 00:10:52.043 "raid_level": "raid1", 00:10:52.043 "superblock": true, 00:10:52.043 "num_base_bdevs": 3, 00:10:52.043 "num_base_bdevs_discovered": 2, 00:10:52.043 "num_base_bdevs_operational": 2, 00:10:52.043 "base_bdevs_list": [ 00:10:52.043 { 00:10:52.043 "name": null, 00:10:52.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.043 "is_configured": false, 00:10:52.043 "data_offset": 0, 00:10:52.043 "data_size": 63488 00:10:52.043 }, 00:10:52.043 { 00:10:52.043 "name": "BaseBdev2", 00:10:52.043 "uuid": "3c26e3b0-477c-5f12-ace0-ee7be4c83723", 00:10:52.043 "is_configured": true, 00:10:52.043 "data_offset": 2048, 00:10:52.043 "data_size": 63488 00:10:52.043 }, 00:10:52.043 { 00:10:52.043 "name": "BaseBdev3", 00:10:52.043 "uuid": "bc2defd0-e59c-5beb-87c3-c0013b880fbe", 00:10:52.043 "is_configured": true, 00:10:52.043 "data_offset": 2048, 00:10:52.043 "data_size": 63488 00:10:52.043 } 00:10:52.043 ] 00:10:52.043 }' 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.043 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.303 [2024-11-20 17:45:19.432516] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.303 [2024-11-20 17:45:19.432680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.303 [2024-11-20 17:45:19.435792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.303 [2024-11-20 17:45:19.435932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.303 [2024-11-20 17:45:19.436093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.303 [2024-11-20 17:45:19.436168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.303 { 00:10:52.303 "results": [ 00:10:52.303 { 00:10:52.303 "job": "raid_bdev1", 00:10:52.303 "core_mask": "0x1", 00:10:52.303 "workload": "randrw", 00:10:52.303 "percentage": 50, 00:10:52.303 "status": "finished", 00:10:52.303 "queue_depth": 1, 00:10:52.303 "io_size": 131072, 00:10:52.303 "runtime": 1.346793, 00:10:52.303 "iops": 9880.508734452882, 00:10:52.303 "mibps": 1235.0635918066102, 00:10:52.303 "io_failed": 0, 00:10:52.303 "io_timeout": 0, 00:10:52.303 "avg_latency_us": 97.75407040258222, 00:10:52.303 "min_latency_us": 26.382532751091702, 00:10:52.303 "max_latency_us": 1652.709170305677 00:10:52.303 } 00:10:52.303 ], 00:10:52.303 "core_count": 1 00:10:52.303 } 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69647 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69647 ']' 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69647 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:52.303 17:45:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69647 00:10:52.303 killing process with pid 69647 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69647' 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69647 00:10:52.303 [2024-11-20 17:45:19.470241] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.303 17:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69647 00:10:52.871 [2024-11-20 17:45:19.755889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xZd6icC2wk 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:54.251 00:10:54.251 real 0m4.921s 00:10:54.251 user 0m5.602s 00:10:54.251 sys 0m0.728s 00:10:54.251 
************************************ 00:10:54.251 END TEST raid_write_error_test 00:10:54.251 ************************************ 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.251 17:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.251 17:45:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:54.251 17:45:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:54.251 17:45:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:54.251 17:45:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.251 17:45:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.251 17:45:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.251 ************************************ 00:10:54.251 START TEST raid_state_function_test 00:10:54.251 ************************************ 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:54.251 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:54.251 Process raid pid: 69794 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69794 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69794' 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69794 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69794 ']' 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.252 17:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.252 [2024-11-20 17:45:21.404712] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:10:54.252 [2024-11-20 17:45:21.404949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.511 [2024-11-20 17:45:21.581570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.771 [2024-11-20 17:45:21.704768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.771 [2024-11-20 17:45:21.926615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.771 [2024-11-20 17:45:21.926653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.340 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.340 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:55.340 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.340 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.340 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.340 [2024-11-20 17:45:22.291197] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.340 [2024-11-20 17:45:22.291288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.340 [2024-11-20 17:45:22.291301] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.340 [2024-11-20 17:45:22.291313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.340 [2024-11-20 17:45:22.291320] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:55.340 [2024-11-20 17:45:22.291331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.341 [2024-11-20 17:45:22.291338] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.341 [2024-11-20 17:45:22.291349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.341 "name": "Existed_Raid", 00:10:55.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.341 "strip_size_kb": 64, 00:10:55.341 "state": "configuring", 00:10:55.341 "raid_level": "raid0", 00:10:55.341 "superblock": false, 00:10:55.341 "num_base_bdevs": 4, 00:10:55.341 "num_base_bdevs_discovered": 0, 00:10:55.341 "num_base_bdevs_operational": 4, 00:10:55.341 "base_bdevs_list": [ 00:10:55.341 { 00:10:55.341 "name": "BaseBdev1", 00:10:55.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.341 "is_configured": false, 00:10:55.341 "data_offset": 0, 00:10:55.341 "data_size": 0 00:10:55.341 }, 00:10:55.341 { 00:10:55.341 "name": "BaseBdev2", 00:10:55.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.341 "is_configured": false, 00:10:55.341 "data_offset": 0, 00:10:55.341 "data_size": 0 00:10:55.341 }, 00:10:55.341 { 00:10:55.341 "name": "BaseBdev3", 00:10:55.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.341 "is_configured": false, 00:10:55.341 "data_offset": 0, 00:10:55.341 "data_size": 0 00:10:55.341 }, 00:10:55.341 { 00:10:55.341 "name": "BaseBdev4", 00:10:55.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.341 "is_configured": false, 00:10:55.341 "data_offset": 0, 00:10:55.341 "data_size": 0 00:10:55.341 } 00:10:55.341 ] 00:10:55.341 }' 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.341 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.600 [2024-11-20 17:45:22.754396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.600 [2024-11-20 17:45:22.754460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.600 [2024-11-20 17:45:22.766354] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.600 [2024-11-20 17:45:22.766408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.600 [2024-11-20 17:45:22.766419] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.600 [2024-11-20 17:45:22.766430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.600 [2024-11-20 17:45:22.766437] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.600 [2024-11-20 17:45:22.766447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.600 [2024-11-20 17:45:22.766455] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.600 [2024-11-20 17:45:22.766465] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.600 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.859 [2024-11-20 17:45:22.826863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.859 BaseBdev1 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.859 [ 00:10:55.859 { 00:10:55.859 "name": "BaseBdev1", 00:10:55.859 "aliases": [ 00:10:55.859 "ae59ce19-28bc-4f18-828f-128c217e6d33" 00:10:55.859 ], 00:10:55.859 "product_name": "Malloc disk", 00:10:55.859 "block_size": 512, 00:10:55.859 "num_blocks": 65536, 00:10:55.859 "uuid": "ae59ce19-28bc-4f18-828f-128c217e6d33", 00:10:55.859 "assigned_rate_limits": { 00:10:55.859 "rw_ios_per_sec": 0, 00:10:55.859 "rw_mbytes_per_sec": 0, 00:10:55.859 "r_mbytes_per_sec": 0, 00:10:55.859 "w_mbytes_per_sec": 0 00:10:55.859 }, 00:10:55.859 "claimed": true, 00:10:55.859 "claim_type": "exclusive_write", 00:10:55.859 "zoned": false, 00:10:55.859 "supported_io_types": { 00:10:55.859 "read": true, 00:10:55.859 "write": true, 00:10:55.859 "unmap": true, 00:10:55.859 "flush": true, 00:10:55.859 "reset": true, 00:10:55.859 "nvme_admin": false, 00:10:55.859 "nvme_io": false, 00:10:55.859 "nvme_io_md": false, 00:10:55.859 "write_zeroes": true, 00:10:55.859 "zcopy": true, 00:10:55.859 "get_zone_info": false, 00:10:55.859 "zone_management": false, 00:10:55.859 "zone_append": false, 00:10:55.859 "compare": false, 00:10:55.859 "compare_and_write": false, 00:10:55.859 "abort": true, 00:10:55.859 "seek_hole": false, 00:10:55.859 "seek_data": false, 00:10:55.859 "copy": true, 00:10:55.859 "nvme_iov_md": false 00:10:55.859 }, 00:10:55.859 "memory_domains": [ 00:10:55.859 { 00:10:55.859 "dma_device_id": "system", 00:10:55.859 "dma_device_type": 1 00:10:55.859 }, 00:10:55.859 { 00:10:55.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.859 "dma_device_type": 2 00:10:55.859 } 00:10:55.859 ], 00:10:55.859 "driver_specific": {} 00:10:55.859 } 00:10:55.859 ] 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.859 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.859 "name": "Existed_Raid", 
00:10:55.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.859 "strip_size_kb": 64, 00:10:55.859 "state": "configuring", 00:10:55.859 "raid_level": "raid0", 00:10:55.859 "superblock": false, 00:10:55.859 "num_base_bdevs": 4, 00:10:55.859 "num_base_bdevs_discovered": 1, 00:10:55.859 "num_base_bdevs_operational": 4, 00:10:55.859 "base_bdevs_list": [ 00:10:55.859 { 00:10:55.860 "name": "BaseBdev1", 00:10:55.860 "uuid": "ae59ce19-28bc-4f18-828f-128c217e6d33", 00:10:55.860 "is_configured": true, 00:10:55.860 "data_offset": 0, 00:10:55.860 "data_size": 65536 00:10:55.860 }, 00:10:55.860 { 00:10:55.860 "name": "BaseBdev2", 00:10:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.860 "is_configured": false, 00:10:55.860 "data_offset": 0, 00:10:55.860 "data_size": 0 00:10:55.860 }, 00:10:55.860 { 00:10:55.860 "name": "BaseBdev3", 00:10:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.860 "is_configured": false, 00:10:55.860 "data_offset": 0, 00:10:55.860 "data_size": 0 00:10:55.860 }, 00:10:55.860 { 00:10:55.860 "name": "BaseBdev4", 00:10:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.860 "is_configured": false, 00:10:55.860 "data_offset": 0, 00:10:55.860 "data_size": 0 00:10:55.860 } 00:10:55.860 ] 00:10:55.860 }' 00:10:55.860 17:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.860 17:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.119 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:56.119 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.119 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.119 [2024-11-20 17:45:23.282203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.119 [2024-11-20 17:45:23.282287] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:56.119 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.119 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.119 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.119 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.119 [2024-11-20 17:45:23.294221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.388 [2024-11-20 17:45:23.296474] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.388 [2024-11-20 17:45:23.296517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.388 [2024-11-20 17:45:23.296528] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:56.388 [2024-11-20 17:45:23.296539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.388 [2024-11-20 17:45:23.296546] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:56.388 [2024-11-20 17:45:23.296555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.388 "name": "Existed_Raid", 00:10:56.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.388 "strip_size_kb": 64, 00:10:56.388 "state": "configuring", 00:10:56.388 "raid_level": "raid0", 00:10:56.388 "superblock": false, 00:10:56.388 "num_base_bdevs": 4, 00:10:56.388 
"num_base_bdevs_discovered": 1, 00:10:56.388 "num_base_bdevs_operational": 4, 00:10:56.388 "base_bdevs_list": [ 00:10:56.388 { 00:10:56.388 "name": "BaseBdev1", 00:10:56.388 "uuid": "ae59ce19-28bc-4f18-828f-128c217e6d33", 00:10:56.388 "is_configured": true, 00:10:56.388 "data_offset": 0, 00:10:56.388 "data_size": 65536 00:10:56.388 }, 00:10:56.388 { 00:10:56.388 "name": "BaseBdev2", 00:10:56.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.388 "is_configured": false, 00:10:56.388 "data_offset": 0, 00:10:56.388 "data_size": 0 00:10:56.388 }, 00:10:56.388 { 00:10:56.388 "name": "BaseBdev3", 00:10:56.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.388 "is_configured": false, 00:10:56.388 "data_offset": 0, 00:10:56.388 "data_size": 0 00:10:56.388 }, 00:10:56.388 { 00:10:56.388 "name": "BaseBdev4", 00:10:56.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.388 "is_configured": false, 00:10:56.388 "data_offset": 0, 00:10:56.388 "data_size": 0 00:10:56.388 } 00:10:56.388 ] 00:10:56.388 }' 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.388 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.646 [2024-11-20 17:45:23.814007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.646 BaseBdev2 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:56.646 17:45:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.646 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.906 [ 00:10:56.906 { 00:10:56.906 "name": "BaseBdev2", 00:10:56.906 "aliases": [ 00:10:56.906 "cda60910-4f6e-464e-9408-3e2d50eabc04" 00:10:56.906 ], 00:10:56.906 "product_name": "Malloc disk", 00:10:56.906 "block_size": 512, 00:10:56.906 "num_blocks": 65536, 00:10:56.906 "uuid": "cda60910-4f6e-464e-9408-3e2d50eabc04", 00:10:56.906 "assigned_rate_limits": { 00:10:56.906 "rw_ios_per_sec": 0, 00:10:56.906 "rw_mbytes_per_sec": 0, 00:10:56.906 "r_mbytes_per_sec": 0, 00:10:56.906 "w_mbytes_per_sec": 0 00:10:56.906 }, 00:10:56.906 "claimed": true, 00:10:56.906 "claim_type": "exclusive_write", 00:10:56.906 "zoned": false, 00:10:56.906 "supported_io_types": { 
00:10:56.906 "read": true, 00:10:56.906 "write": true, 00:10:56.906 "unmap": true, 00:10:56.906 "flush": true, 00:10:56.906 "reset": true, 00:10:56.906 "nvme_admin": false, 00:10:56.906 "nvme_io": false, 00:10:56.906 "nvme_io_md": false, 00:10:56.906 "write_zeroes": true, 00:10:56.906 "zcopy": true, 00:10:56.906 "get_zone_info": false, 00:10:56.906 "zone_management": false, 00:10:56.906 "zone_append": false, 00:10:56.906 "compare": false, 00:10:56.906 "compare_and_write": false, 00:10:56.906 "abort": true, 00:10:56.906 "seek_hole": false, 00:10:56.906 "seek_data": false, 00:10:56.906 "copy": true, 00:10:56.906 "nvme_iov_md": false 00:10:56.906 }, 00:10:56.906 "memory_domains": [ 00:10:56.906 { 00:10:56.906 "dma_device_id": "system", 00:10:56.906 "dma_device_type": 1 00:10:56.906 }, 00:10:56.906 { 00:10:56.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.906 "dma_device_type": 2 00:10:56.906 } 00:10:56.906 ], 00:10:56.906 "driver_specific": {} 00:10:56.906 } 00:10:56.906 ] 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.906 "name": "Existed_Raid", 00:10:56.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.906 "strip_size_kb": 64, 00:10:56.906 "state": "configuring", 00:10:56.906 "raid_level": "raid0", 00:10:56.906 "superblock": false, 00:10:56.906 "num_base_bdevs": 4, 00:10:56.906 "num_base_bdevs_discovered": 2, 00:10:56.906 "num_base_bdevs_operational": 4, 00:10:56.906 "base_bdevs_list": [ 00:10:56.906 { 00:10:56.906 "name": "BaseBdev1", 00:10:56.906 "uuid": "ae59ce19-28bc-4f18-828f-128c217e6d33", 00:10:56.906 "is_configured": true, 00:10:56.906 "data_offset": 0, 00:10:56.906 "data_size": 65536 00:10:56.906 }, 00:10:56.906 { 00:10:56.906 "name": "BaseBdev2", 00:10:56.906 "uuid": "cda60910-4f6e-464e-9408-3e2d50eabc04", 00:10:56.906 
"is_configured": true, 00:10:56.906 "data_offset": 0, 00:10:56.906 "data_size": 65536 00:10:56.906 }, 00:10:56.906 { 00:10:56.906 "name": "BaseBdev3", 00:10:56.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.906 "is_configured": false, 00:10:56.906 "data_offset": 0, 00:10:56.906 "data_size": 0 00:10:56.906 }, 00:10:56.906 { 00:10:56.906 "name": "BaseBdev4", 00:10:56.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.906 "is_configured": false, 00:10:56.906 "data_offset": 0, 00:10:56.906 "data_size": 0 00:10:56.906 } 00:10:56.906 ] 00:10:56.906 }' 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.906 17:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.166 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.166 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.166 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.426 [2024-11-20 17:45:24.346806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.426 BaseBdev3 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.426 [ 00:10:57.426 { 00:10:57.426 "name": "BaseBdev3", 00:10:57.426 "aliases": [ 00:10:57.426 "16f2582e-c20a-4b03-857e-6cce8444b8e1" 00:10:57.426 ], 00:10:57.426 "product_name": "Malloc disk", 00:10:57.426 "block_size": 512, 00:10:57.426 "num_blocks": 65536, 00:10:57.426 "uuid": "16f2582e-c20a-4b03-857e-6cce8444b8e1", 00:10:57.426 "assigned_rate_limits": { 00:10:57.426 "rw_ios_per_sec": 0, 00:10:57.426 "rw_mbytes_per_sec": 0, 00:10:57.426 "r_mbytes_per_sec": 0, 00:10:57.426 "w_mbytes_per_sec": 0 00:10:57.426 }, 00:10:57.426 "claimed": true, 00:10:57.426 "claim_type": "exclusive_write", 00:10:57.426 "zoned": false, 00:10:57.426 "supported_io_types": { 00:10:57.426 "read": true, 00:10:57.426 "write": true, 00:10:57.426 "unmap": true, 00:10:57.426 "flush": true, 00:10:57.426 "reset": true, 00:10:57.426 "nvme_admin": false, 00:10:57.426 "nvme_io": false, 00:10:57.426 "nvme_io_md": false, 00:10:57.426 "write_zeroes": true, 00:10:57.426 "zcopy": true, 00:10:57.426 "get_zone_info": false, 00:10:57.426 "zone_management": false, 00:10:57.426 "zone_append": false, 00:10:57.426 "compare": false, 00:10:57.426 "compare_and_write": false, 
00:10:57.426 "abort": true, 00:10:57.426 "seek_hole": false, 00:10:57.426 "seek_data": false, 00:10:57.426 "copy": true, 00:10:57.426 "nvme_iov_md": false 00:10:57.426 }, 00:10:57.426 "memory_domains": [ 00:10:57.426 { 00:10:57.426 "dma_device_id": "system", 00:10:57.426 "dma_device_type": 1 00:10:57.426 }, 00:10:57.426 { 00:10:57.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.426 "dma_device_type": 2 00:10:57.426 } 00:10:57.426 ], 00:10:57.426 "driver_specific": {} 00:10:57.426 } 00:10:57.426 ] 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.426 "name": "Existed_Raid", 00:10:57.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.426 "strip_size_kb": 64, 00:10:57.426 "state": "configuring", 00:10:57.426 "raid_level": "raid0", 00:10:57.426 "superblock": false, 00:10:57.426 "num_base_bdevs": 4, 00:10:57.426 "num_base_bdevs_discovered": 3, 00:10:57.426 "num_base_bdevs_operational": 4, 00:10:57.426 "base_bdevs_list": [ 00:10:57.426 { 00:10:57.426 "name": "BaseBdev1", 00:10:57.426 "uuid": "ae59ce19-28bc-4f18-828f-128c217e6d33", 00:10:57.426 "is_configured": true, 00:10:57.426 "data_offset": 0, 00:10:57.426 "data_size": 65536 00:10:57.426 }, 00:10:57.426 { 00:10:57.426 "name": "BaseBdev2", 00:10:57.426 "uuid": "cda60910-4f6e-464e-9408-3e2d50eabc04", 00:10:57.426 "is_configured": true, 00:10:57.426 "data_offset": 0, 00:10:57.426 "data_size": 65536 00:10:57.426 }, 00:10:57.426 { 00:10:57.426 "name": "BaseBdev3", 00:10:57.426 "uuid": "16f2582e-c20a-4b03-857e-6cce8444b8e1", 00:10:57.426 "is_configured": true, 00:10:57.426 "data_offset": 0, 00:10:57.426 "data_size": 65536 00:10:57.426 }, 00:10:57.426 { 00:10:57.426 "name": "BaseBdev4", 00:10:57.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.426 "is_configured": false, 
00:10:57.426 "data_offset": 0, 00:10:57.426 "data_size": 0 00:10:57.426 } 00:10:57.426 ] 00:10:57.426 }' 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.426 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.684 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.684 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.684 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.942 [2024-11-20 17:45:24.884414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.943 [2024-11-20 17:45:24.884499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:57.943 [2024-11-20 17:45:24.884512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:57.943 [2024-11-20 17:45:24.884898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.943 [2024-11-20 17:45:24.885129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:57.943 [2024-11-20 17:45:24.885155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:57.943 [2024-11-20 17:45:24.885494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.943 BaseBdev4 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.943 [ 00:10:57.943 { 00:10:57.943 "name": "BaseBdev4", 00:10:57.943 "aliases": [ 00:10:57.943 "ae51c4b5-54ca-4b0a-b0f9-2f7de69bfa30" 00:10:57.943 ], 00:10:57.943 "product_name": "Malloc disk", 00:10:57.943 "block_size": 512, 00:10:57.943 "num_blocks": 65536, 00:10:57.943 "uuid": "ae51c4b5-54ca-4b0a-b0f9-2f7de69bfa30", 00:10:57.943 "assigned_rate_limits": { 00:10:57.943 "rw_ios_per_sec": 0, 00:10:57.943 "rw_mbytes_per_sec": 0, 00:10:57.943 "r_mbytes_per_sec": 0, 00:10:57.943 "w_mbytes_per_sec": 0 00:10:57.943 }, 00:10:57.943 "claimed": true, 00:10:57.943 "claim_type": "exclusive_write", 00:10:57.943 "zoned": false, 00:10:57.943 "supported_io_types": { 00:10:57.943 "read": true, 00:10:57.943 "write": true, 00:10:57.943 "unmap": true, 00:10:57.943 "flush": true, 00:10:57.943 "reset": true, 00:10:57.943 
"nvme_admin": false, 00:10:57.943 "nvme_io": false, 00:10:57.943 "nvme_io_md": false, 00:10:57.943 "write_zeroes": true, 00:10:57.943 "zcopy": true, 00:10:57.943 "get_zone_info": false, 00:10:57.943 "zone_management": false, 00:10:57.943 "zone_append": false, 00:10:57.943 "compare": false, 00:10:57.943 "compare_and_write": false, 00:10:57.943 "abort": true, 00:10:57.943 "seek_hole": false, 00:10:57.943 "seek_data": false, 00:10:57.943 "copy": true, 00:10:57.943 "nvme_iov_md": false 00:10:57.943 }, 00:10:57.943 "memory_domains": [ 00:10:57.943 { 00:10:57.943 "dma_device_id": "system", 00:10:57.943 "dma_device_type": 1 00:10:57.943 }, 00:10:57.943 { 00:10:57.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.943 "dma_device_type": 2 00:10:57.943 } 00:10:57.943 ], 00:10:57.943 "driver_specific": {} 00:10:57.943 } 00:10:57.943 ] 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.943 17:45:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.943 "name": "Existed_Raid", 00:10:57.943 "uuid": "fe13a886-907b-4fdc-ad96-b5dbe8c64faa", 00:10:57.943 "strip_size_kb": 64, 00:10:57.943 "state": "online", 00:10:57.943 "raid_level": "raid0", 00:10:57.943 "superblock": false, 00:10:57.943 "num_base_bdevs": 4, 00:10:57.943 "num_base_bdevs_discovered": 4, 00:10:57.943 "num_base_bdevs_operational": 4, 00:10:57.943 "base_bdevs_list": [ 00:10:57.943 { 00:10:57.943 "name": "BaseBdev1", 00:10:57.943 "uuid": "ae59ce19-28bc-4f18-828f-128c217e6d33", 00:10:57.943 "is_configured": true, 00:10:57.943 "data_offset": 0, 00:10:57.943 "data_size": 65536 00:10:57.943 }, 00:10:57.943 { 00:10:57.943 "name": "BaseBdev2", 00:10:57.943 "uuid": "cda60910-4f6e-464e-9408-3e2d50eabc04", 00:10:57.943 "is_configured": true, 00:10:57.943 "data_offset": 0, 00:10:57.943 "data_size": 65536 00:10:57.943 }, 00:10:57.943 { 00:10:57.943 "name": "BaseBdev3", 00:10:57.943 "uuid": 
"16f2582e-c20a-4b03-857e-6cce8444b8e1", 00:10:57.943 "is_configured": true, 00:10:57.943 "data_offset": 0, 00:10:57.943 "data_size": 65536 00:10:57.943 }, 00:10:57.943 { 00:10:57.943 "name": "BaseBdev4", 00:10:57.943 "uuid": "ae51c4b5-54ca-4b0a-b0f9-2f7de69bfa30", 00:10:57.943 "is_configured": true, 00:10:57.943 "data_offset": 0, 00:10:57.943 "data_size": 65536 00:10:57.943 } 00:10:57.943 ] 00:10:57.943 }' 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.943 17:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.202 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:58.202 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:58.202 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.202 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.202 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.202 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.460 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:58.460 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.460 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.461 [2024-11-20 17:45:25.384136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.461 17:45:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.461 "name": "Existed_Raid", 00:10:58.461 "aliases": [ 00:10:58.461 "fe13a886-907b-4fdc-ad96-b5dbe8c64faa" 00:10:58.461 ], 00:10:58.461 "product_name": "Raid Volume", 00:10:58.461 "block_size": 512, 00:10:58.461 "num_blocks": 262144, 00:10:58.461 "uuid": "fe13a886-907b-4fdc-ad96-b5dbe8c64faa", 00:10:58.461 "assigned_rate_limits": { 00:10:58.461 "rw_ios_per_sec": 0, 00:10:58.461 "rw_mbytes_per_sec": 0, 00:10:58.461 "r_mbytes_per_sec": 0, 00:10:58.461 "w_mbytes_per_sec": 0 00:10:58.461 }, 00:10:58.461 "claimed": false, 00:10:58.461 "zoned": false, 00:10:58.461 "supported_io_types": { 00:10:58.461 "read": true, 00:10:58.461 "write": true, 00:10:58.461 "unmap": true, 00:10:58.461 "flush": true, 00:10:58.461 "reset": true, 00:10:58.461 "nvme_admin": false, 00:10:58.461 "nvme_io": false, 00:10:58.461 "nvme_io_md": false, 00:10:58.461 "write_zeroes": true, 00:10:58.461 "zcopy": false, 00:10:58.461 "get_zone_info": false, 00:10:58.461 "zone_management": false, 00:10:58.461 "zone_append": false, 00:10:58.461 "compare": false, 00:10:58.461 "compare_and_write": false, 00:10:58.461 "abort": false, 00:10:58.461 "seek_hole": false, 00:10:58.461 "seek_data": false, 00:10:58.461 "copy": false, 00:10:58.461 "nvme_iov_md": false 00:10:58.461 }, 00:10:58.461 "memory_domains": [ 00:10:58.461 { 00:10:58.461 "dma_device_id": "system", 00:10:58.461 "dma_device_type": 1 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.461 "dma_device_type": 2 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "dma_device_id": "system", 00:10:58.461 "dma_device_type": 1 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.461 "dma_device_type": 2 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "dma_device_id": "system", 00:10:58.461 "dma_device_type": 1 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:58.461 "dma_device_type": 2 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "dma_device_id": "system", 00:10:58.461 "dma_device_type": 1 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.461 "dma_device_type": 2 00:10:58.461 } 00:10:58.461 ], 00:10:58.461 "driver_specific": { 00:10:58.461 "raid": { 00:10:58.461 "uuid": "fe13a886-907b-4fdc-ad96-b5dbe8c64faa", 00:10:58.461 "strip_size_kb": 64, 00:10:58.461 "state": "online", 00:10:58.461 "raid_level": "raid0", 00:10:58.461 "superblock": false, 00:10:58.461 "num_base_bdevs": 4, 00:10:58.461 "num_base_bdevs_discovered": 4, 00:10:58.461 "num_base_bdevs_operational": 4, 00:10:58.461 "base_bdevs_list": [ 00:10:58.461 { 00:10:58.461 "name": "BaseBdev1", 00:10:58.461 "uuid": "ae59ce19-28bc-4f18-828f-128c217e6d33", 00:10:58.461 "is_configured": true, 00:10:58.461 "data_offset": 0, 00:10:58.461 "data_size": 65536 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "name": "BaseBdev2", 00:10:58.461 "uuid": "cda60910-4f6e-464e-9408-3e2d50eabc04", 00:10:58.461 "is_configured": true, 00:10:58.461 "data_offset": 0, 00:10:58.461 "data_size": 65536 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "name": "BaseBdev3", 00:10:58.461 "uuid": "16f2582e-c20a-4b03-857e-6cce8444b8e1", 00:10:58.461 "is_configured": true, 00:10:58.461 "data_offset": 0, 00:10:58.461 "data_size": 65536 00:10:58.461 }, 00:10:58.461 { 00:10:58.461 "name": "BaseBdev4", 00:10:58.461 "uuid": "ae51c4b5-54ca-4b0a-b0f9-2f7de69bfa30", 00:10:58.461 "is_configured": true, 00:10:58.461 "data_offset": 0, 00:10:58.461 "data_size": 65536 00:10:58.461 } 00:10:58.461 ] 00:10:58.461 } 00:10:58.461 } 00:10:58.461 }' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:58.461 BaseBdev2 00:10:58.461 BaseBdev3 
00:10:58.461 BaseBdev4' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.461 17:45:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.461 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.720 17:45:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.720 [2024-11-20 17:45:25.715229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.720 [2024-11-20 17:45:25.715285] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.720 [2024-11-20 17:45:25.715354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.720 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.721 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.721 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.721 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.721 "name": "Existed_Raid", 00:10:58.721 "uuid": "fe13a886-907b-4fdc-ad96-b5dbe8c64faa", 00:10:58.721 "strip_size_kb": 64, 00:10:58.721 "state": "offline", 00:10:58.721 "raid_level": "raid0", 00:10:58.721 "superblock": false, 00:10:58.721 "num_base_bdevs": 4, 00:10:58.721 "num_base_bdevs_discovered": 3, 00:10:58.721 "num_base_bdevs_operational": 3, 00:10:58.721 "base_bdevs_list": [ 00:10:58.721 { 00:10:58.721 "name": null, 00:10:58.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.721 "is_configured": false, 00:10:58.721 "data_offset": 0, 00:10:58.721 "data_size": 65536 00:10:58.721 }, 00:10:58.721 { 00:10:58.721 "name": "BaseBdev2", 00:10:58.721 "uuid": "cda60910-4f6e-464e-9408-3e2d50eabc04", 00:10:58.721 "is_configured": 
true, 00:10:58.721 "data_offset": 0, 00:10:58.721 "data_size": 65536 00:10:58.721 }, 00:10:58.721 { 00:10:58.721 "name": "BaseBdev3", 00:10:58.721 "uuid": "16f2582e-c20a-4b03-857e-6cce8444b8e1", 00:10:58.721 "is_configured": true, 00:10:58.721 "data_offset": 0, 00:10:58.721 "data_size": 65536 00:10:58.721 }, 00:10:58.721 { 00:10:58.721 "name": "BaseBdev4", 00:10:58.721 "uuid": "ae51c4b5-54ca-4b0a-b0f9-2f7de69bfa30", 00:10:58.721 "is_configured": true, 00:10:58.721 "data_offset": 0, 00:10:58.721 "data_size": 65536 00:10:58.721 } 00:10:58.721 ] 00:10:58.721 }' 00:10:58.721 17:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.721 17:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.287 [2024-11-20 17:45:26.341633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.287 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.546 [2024-11-20 17:45:26.516203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.546 17:45:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.546 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.547 [2024-11-20 17:45:26.696771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:59.547 [2024-11-20 17:45:26.696850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:59.805 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.805 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.805 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.805 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.805 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.806 BaseBdev2 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.806 [ 00:10:59.806 { 00:10:59.806 "name": "BaseBdev2", 00:10:59.806 "aliases": [ 00:10:59.806 "4131011a-931c-47a4-bfcd-161d3c4374ef" 00:10:59.806 ], 00:10:59.806 "product_name": "Malloc disk", 00:10:59.806 "block_size": 512, 00:10:59.806 "num_blocks": 65536, 00:10:59.806 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:10:59.806 "assigned_rate_limits": { 00:10:59.806 "rw_ios_per_sec": 0, 00:10:59.806 "rw_mbytes_per_sec": 0, 00:10:59.806 "r_mbytes_per_sec": 0, 00:10:59.806 "w_mbytes_per_sec": 0 00:10:59.806 }, 00:10:59.806 "claimed": false, 00:10:59.806 "zoned": false, 00:10:59.806 "supported_io_types": { 00:10:59.806 "read": true, 00:10:59.806 "write": true, 00:10:59.806 "unmap": true, 00:10:59.806 "flush": true, 00:10:59.806 "reset": true, 00:10:59.806 "nvme_admin": false, 00:10:59.806 "nvme_io": false, 00:10:59.806 "nvme_io_md": false, 00:10:59.806 "write_zeroes": true, 00:10:59.806 "zcopy": true, 00:10:59.806 "get_zone_info": false, 00:10:59.806 "zone_management": false, 00:10:59.806 "zone_append": false, 00:10:59.806 "compare": false, 00:10:59.806 "compare_and_write": false, 00:10:59.806 "abort": true, 00:10:59.806 "seek_hole": false, 00:10:59.806 
"seek_data": false, 00:10:59.806 "copy": true, 00:10:59.806 "nvme_iov_md": false 00:10:59.806 }, 00:10:59.806 "memory_domains": [ 00:10:59.806 { 00:10:59.806 "dma_device_id": "system", 00:10:59.806 "dma_device_type": 1 00:10:59.806 }, 00:10:59.806 { 00:10:59.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.806 "dma_device_type": 2 00:10:59.806 } 00:10:59.806 ], 00:10:59.806 "driver_specific": {} 00:10:59.806 } 00:10:59.806 ] 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.806 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.070 BaseBdev3 00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.070 17:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.070 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.071 [ 00:11:00.071 { 00:11:00.071 "name": "BaseBdev3", 00:11:00.071 "aliases": [ 00:11:00.071 "c9126897-e842-41d7-b0e7-f07268a4989c" 00:11:00.071 ], 00:11:00.071 "product_name": "Malloc disk", 00:11:00.071 "block_size": 512, 00:11:00.071 "num_blocks": 65536, 00:11:00.071 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:00.071 "assigned_rate_limits": { 00:11:00.071 "rw_ios_per_sec": 0, 00:11:00.071 "rw_mbytes_per_sec": 0, 00:11:00.071 "r_mbytes_per_sec": 0, 00:11:00.071 "w_mbytes_per_sec": 0 00:11:00.071 }, 00:11:00.071 "claimed": false, 00:11:00.071 "zoned": false, 00:11:00.071 "supported_io_types": { 00:11:00.071 "read": true, 00:11:00.071 "write": true, 00:11:00.071 "unmap": true, 00:11:00.071 "flush": true, 00:11:00.071 "reset": true, 00:11:00.071 "nvme_admin": false, 00:11:00.071 "nvme_io": false, 00:11:00.071 "nvme_io_md": false, 00:11:00.071 "write_zeroes": true, 00:11:00.071 "zcopy": true, 00:11:00.071 "get_zone_info": false, 00:11:00.071 "zone_management": false, 00:11:00.071 "zone_append": false, 00:11:00.071 "compare": false, 00:11:00.071 "compare_and_write": false, 00:11:00.071 "abort": true, 00:11:00.071 "seek_hole": false, 00:11:00.071 "seek_data": false, 
00:11:00.071 "copy": true, 00:11:00.071 "nvme_iov_md": false 00:11:00.071 }, 00:11:00.071 "memory_domains": [ 00:11:00.071 { 00:11:00.071 "dma_device_id": "system", 00:11:00.071 "dma_device_type": 1 00:11:00.071 }, 00:11:00.071 { 00:11:00.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.071 "dma_device_type": 2 00:11:00.071 } 00:11:00.071 ], 00:11:00.071 "driver_specific": {} 00:11:00.071 } 00:11:00.071 ] 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.071 BaseBdev4 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.071 
17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.071 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.071 [ 00:11:00.071 { 00:11:00.071 "name": "BaseBdev4", 00:11:00.071 "aliases": [ 00:11:00.071 "38ed17b1-72f3-48eb-b479-49c4edddb955" 00:11:00.071 ], 00:11:00.071 "product_name": "Malloc disk", 00:11:00.071 "block_size": 512, 00:11:00.071 "num_blocks": 65536, 00:11:00.071 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:00.071 "assigned_rate_limits": { 00:11:00.071 "rw_ios_per_sec": 0, 00:11:00.071 "rw_mbytes_per_sec": 0, 00:11:00.071 "r_mbytes_per_sec": 0, 00:11:00.071 "w_mbytes_per_sec": 0 00:11:00.071 }, 00:11:00.071 "claimed": false, 00:11:00.071 "zoned": false, 00:11:00.071 "supported_io_types": { 00:11:00.071 "read": true, 00:11:00.071 "write": true, 00:11:00.071 "unmap": true, 00:11:00.071 "flush": true, 00:11:00.071 "reset": true, 00:11:00.071 "nvme_admin": false, 00:11:00.071 "nvme_io": false, 00:11:00.071 "nvme_io_md": false, 00:11:00.071 "write_zeroes": true, 00:11:00.071 "zcopy": true, 00:11:00.071 "get_zone_info": false, 00:11:00.071 "zone_management": false, 00:11:00.071 "zone_append": false, 00:11:00.071 "compare": false, 00:11:00.071 "compare_and_write": false, 00:11:00.071 "abort": true, 00:11:00.071 "seek_hole": false, 00:11:00.071 "seek_data": false, 00:11:00.071 
"copy": true, 00:11:00.071 "nvme_iov_md": false 00:11:00.071 }, 00:11:00.071 "memory_domains": [ 00:11:00.071 { 00:11:00.071 "dma_device_id": "system", 00:11:00.071 "dma_device_type": 1 00:11:00.071 }, 00:11:00.071 { 00:11:00.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.072 "dma_device_type": 2 00:11:00.072 } 00:11:00.072 ], 00:11:00.072 "driver_specific": {} 00:11:00.072 } 00:11:00.072 ] 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.072 [2024-11-20 17:45:27.131447] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.072 [2024-11-20 17:45:27.131509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.072 [2024-11-20 17:45:27.131539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.072 [2024-11-20 17:45:27.134041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.072 [2024-11-20 17:45:27.134108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.072 17:45:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.072 "name": "Existed_Raid", 00:11:00.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.072 "strip_size_kb": 64, 00:11:00.072 "state": "configuring", 00:11:00.072 
"raid_level": "raid0", 00:11:00.072 "superblock": false, 00:11:00.072 "num_base_bdevs": 4, 00:11:00.072 "num_base_bdevs_discovered": 3, 00:11:00.072 "num_base_bdevs_operational": 4, 00:11:00.072 "base_bdevs_list": [ 00:11:00.072 { 00:11:00.072 "name": "BaseBdev1", 00:11:00.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.072 "is_configured": false, 00:11:00.072 "data_offset": 0, 00:11:00.072 "data_size": 0 00:11:00.072 }, 00:11:00.072 { 00:11:00.072 "name": "BaseBdev2", 00:11:00.072 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:00.072 "is_configured": true, 00:11:00.072 "data_offset": 0, 00:11:00.072 "data_size": 65536 00:11:00.072 }, 00:11:00.072 { 00:11:00.072 "name": "BaseBdev3", 00:11:00.072 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:00.072 "is_configured": true, 00:11:00.072 "data_offset": 0, 00:11:00.072 "data_size": 65536 00:11:00.072 }, 00:11:00.072 { 00:11:00.072 "name": "BaseBdev4", 00:11:00.072 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:00.072 "is_configured": true, 00:11:00.072 "data_offset": 0, 00:11:00.072 "data_size": 65536 00:11:00.072 } 00:11:00.072 ] 00:11:00.072 }' 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.072 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.654 [2024-11-20 17:45:27.594747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.654 "name": "Existed_Raid", 00:11:00.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.654 "strip_size_kb": 64, 00:11:00.654 "state": "configuring", 00:11:00.654 "raid_level": "raid0", 00:11:00.654 "superblock": false, 00:11:00.654 
"num_base_bdevs": 4, 00:11:00.654 "num_base_bdevs_discovered": 2, 00:11:00.654 "num_base_bdevs_operational": 4, 00:11:00.654 "base_bdevs_list": [ 00:11:00.654 { 00:11:00.654 "name": "BaseBdev1", 00:11:00.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.654 "is_configured": false, 00:11:00.654 "data_offset": 0, 00:11:00.654 "data_size": 0 00:11:00.654 }, 00:11:00.654 { 00:11:00.654 "name": null, 00:11:00.654 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:00.654 "is_configured": false, 00:11:00.654 "data_offset": 0, 00:11:00.654 "data_size": 65536 00:11:00.654 }, 00:11:00.654 { 00:11:00.654 "name": "BaseBdev3", 00:11:00.654 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:00.654 "is_configured": true, 00:11:00.654 "data_offset": 0, 00:11:00.654 "data_size": 65536 00:11:00.654 }, 00:11:00.654 { 00:11:00.654 "name": "BaseBdev4", 00:11:00.654 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:00.654 "is_configured": true, 00:11:00.654 "data_offset": 0, 00:11:00.654 "data_size": 65536 00:11:00.654 } 00:11:00.654 ] 00:11:00.654 }' 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.654 17:45:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.913 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.913 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.913 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.913 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.913 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.913 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:00.913 17:45:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.913 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.913 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.172 [2024-11-20 17:45:28.128420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.172 BaseBdev1 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.172 [ 00:11:01.172 { 00:11:01.172 "name": "BaseBdev1", 00:11:01.172 "aliases": [ 00:11:01.172 "641adb76-a013-40d1-a99e-b93f2ab7e980" 00:11:01.172 ], 00:11:01.172 "product_name": "Malloc disk", 00:11:01.172 "block_size": 512, 00:11:01.172 "num_blocks": 65536, 00:11:01.172 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:01.172 "assigned_rate_limits": { 00:11:01.172 "rw_ios_per_sec": 0, 00:11:01.172 "rw_mbytes_per_sec": 0, 00:11:01.172 "r_mbytes_per_sec": 0, 00:11:01.172 "w_mbytes_per_sec": 0 00:11:01.172 }, 00:11:01.172 "claimed": true, 00:11:01.172 "claim_type": "exclusive_write", 00:11:01.172 "zoned": false, 00:11:01.172 "supported_io_types": { 00:11:01.172 "read": true, 00:11:01.172 "write": true, 00:11:01.172 "unmap": true, 00:11:01.172 "flush": true, 00:11:01.172 "reset": true, 00:11:01.172 "nvme_admin": false, 00:11:01.172 "nvme_io": false, 00:11:01.172 "nvme_io_md": false, 00:11:01.172 "write_zeroes": true, 00:11:01.172 "zcopy": true, 00:11:01.172 "get_zone_info": false, 00:11:01.172 "zone_management": false, 00:11:01.172 "zone_append": false, 00:11:01.172 "compare": false, 00:11:01.172 "compare_and_write": false, 00:11:01.172 "abort": true, 00:11:01.172 "seek_hole": false, 00:11:01.172 "seek_data": false, 00:11:01.172 "copy": true, 00:11:01.172 "nvme_iov_md": false 00:11:01.172 }, 00:11:01.172 "memory_domains": [ 00:11:01.172 { 00:11:01.172 "dma_device_id": "system", 00:11:01.172 "dma_device_type": 1 00:11:01.172 }, 00:11:01.172 { 00:11:01.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.172 "dma_device_type": 2 00:11:01.172 } 00:11:01.172 ], 00:11:01.172 "driver_specific": {} 00:11:01.172 } 00:11:01.172 ] 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.172 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.172 "name": "Existed_Raid", 00:11:01.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.172 "strip_size_kb": 64, 00:11:01.172 "state": "configuring", 00:11:01.172 "raid_level": "raid0", 00:11:01.172 "superblock": false, 
00:11:01.173 "num_base_bdevs": 4, 00:11:01.173 "num_base_bdevs_discovered": 3, 00:11:01.173 "num_base_bdevs_operational": 4, 00:11:01.173 "base_bdevs_list": [ 00:11:01.173 { 00:11:01.173 "name": "BaseBdev1", 00:11:01.173 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:01.173 "is_configured": true, 00:11:01.173 "data_offset": 0, 00:11:01.173 "data_size": 65536 00:11:01.173 }, 00:11:01.173 { 00:11:01.173 "name": null, 00:11:01.173 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:01.173 "is_configured": false, 00:11:01.173 "data_offset": 0, 00:11:01.173 "data_size": 65536 00:11:01.173 }, 00:11:01.173 { 00:11:01.173 "name": "BaseBdev3", 00:11:01.173 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:01.173 "is_configured": true, 00:11:01.173 "data_offset": 0, 00:11:01.173 "data_size": 65536 00:11:01.173 }, 00:11:01.173 { 00:11:01.173 "name": "BaseBdev4", 00:11:01.173 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:01.173 "is_configured": true, 00:11:01.173 "data_offset": 0, 00:11:01.173 "data_size": 65536 00:11:01.173 } 00:11:01.173 ] 00:11:01.173 }' 00:11:01.173 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.173 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:01.741 17:45:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.741 [2024-11-20 17:45:28.675617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.741 17:45:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.741 "name": "Existed_Raid", 00:11:01.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.741 "strip_size_kb": 64, 00:11:01.741 "state": "configuring", 00:11:01.741 "raid_level": "raid0", 00:11:01.741 "superblock": false, 00:11:01.741 "num_base_bdevs": 4, 00:11:01.741 "num_base_bdevs_discovered": 2, 00:11:01.741 "num_base_bdevs_operational": 4, 00:11:01.741 "base_bdevs_list": [ 00:11:01.741 { 00:11:01.741 "name": "BaseBdev1", 00:11:01.741 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:01.741 "is_configured": true, 00:11:01.741 "data_offset": 0, 00:11:01.741 "data_size": 65536 00:11:01.741 }, 00:11:01.741 { 00:11:01.741 "name": null, 00:11:01.741 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:01.741 "is_configured": false, 00:11:01.741 "data_offset": 0, 00:11:01.741 "data_size": 65536 00:11:01.741 }, 00:11:01.741 { 00:11:01.741 "name": null, 00:11:01.741 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:01.741 "is_configured": false, 00:11:01.741 "data_offset": 0, 00:11:01.741 "data_size": 65536 00:11:01.741 }, 00:11:01.741 { 00:11:01.741 "name": "BaseBdev4", 00:11:01.741 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:01.741 "is_configured": true, 00:11:01.741 "data_offset": 0, 00:11:01.741 "data_size": 65536 00:11:01.741 } 00:11:01.741 ] 00:11:01.741 }' 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.741 17:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.000 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.000 [2024-11-20 17:45:29.102885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.001 "name": "Existed_Raid", 00:11:02.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.001 "strip_size_kb": 64, 00:11:02.001 "state": "configuring", 00:11:02.001 "raid_level": "raid0", 00:11:02.001 "superblock": false, 00:11:02.001 "num_base_bdevs": 4, 00:11:02.001 "num_base_bdevs_discovered": 3, 00:11:02.001 "num_base_bdevs_operational": 4, 00:11:02.001 "base_bdevs_list": [ 00:11:02.001 { 00:11:02.001 "name": "BaseBdev1", 00:11:02.001 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:02.001 "is_configured": true, 00:11:02.001 "data_offset": 0, 00:11:02.001 "data_size": 65536 00:11:02.001 }, 00:11:02.001 { 00:11:02.001 "name": null, 00:11:02.001 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:02.001 "is_configured": false, 00:11:02.001 "data_offset": 0, 00:11:02.001 "data_size": 65536 00:11:02.001 }, 00:11:02.001 { 00:11:02.001 "name": "BaseBdev3", 00:11:02.001 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 
00:11:02.001 "is_configured": true, 00:11:02.001 "data_offset": 0, 00:11:02.001 "data_size": 65536 00:11:02.001 }, 00:11:02.001 { 00:11:02.001 "name": "BaseBdev4", 00:11:02.001 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:02.001 "is_configured": true, 00:11:02.001 "data_offset": 0, 00:11:02.001 "data_size": 65536 00:11:02.001 } 00:11:02.001 ] 00:11:02.001 }' 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.001 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.568 [2024-11-20 17:45:29.594163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.568 17:45:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.568 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.826 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.827 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.827 "name": "Existed_Raid", 00:11:02.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.827 "strip_size_kb": 64, 00:11:02.827 "state": "configuring", 00:11:02.827 "raid_level": "raid0", 00:11:02.827 "superblock": false, 00:11:02.827 "num_base_bdevs": 4, 00:11:02.827 "num_base_bdevs_discovered": 2, 00:11:02.827 
"num_base_bdevs_operational": 4, 00:11:02.827 "base_bdevs_list": [ 00:11:02.827 { 00:11:02.827 "name": null, 00:11:02.827 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:02.827 "is_configured": false, 00:11:02.827 "data_offset": 0, 00:11:02.827 "data_size": 65536 00:11:02.827 }, 00:11:02.827 { 00:11:02.827 "name": null, 00:11:02.827 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:02.827 "is_configured": false, 00:11:02.827 "data_offset": 0, 00:11:02.827 "data_size": 65536 00:11:02.827 }, 00:11:02.827 { 00:11:02.827 "name": "BaseBdev3", 00:11:02.827 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:02.827 "is_configured": true, 00:11:02.827 "data_offset": 0, 00:11:02.827 "data_size": 65536 00:11:02.827 }, 00:11:02.827 { 00:11:02.827 "name": "BaseBdev4", 00:11:02.827 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:02.827 "is_configured": true, 00:11:02.827 "data_offset": 0, 00:11:02.827 "data_size": 65536 00:11:02.827 } 00:11:02.827 ] 00:11:02.827 }' 00:11:02.827 17:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.827 17:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.086 [2024-11-20 17:45:30.226008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.086 
17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.086 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.344 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.344 "name": "Existed_Raid", 00:11:03.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.344 "strip_size_kb": 64, 00:11:03.344 "state": "configuring", 00:11:03.344 "raid_level": "raid0", 00:11:03.344 "superblock": false, 00:11:03.344 "num_base_bdevs": 4, 00:11:03.344 "num_base_bdevs_discovered": 3, 00:11:03.344 "num_base_bdevs_operational": 4, 00:11:03.344 "base_bdevs_list": [ 00:11:03.344 { 00:11:03.344 "name": null, 00:11:03.344 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:03.345 "is_configured": false, 00:11:03.345 "data_offset": 0, 00:11:03.345 "data_size": 65536 00:11:03.345 }, 00:11:03.345 { 00:11:03.345 "name": "BaseBdev2", 00:11:03.345 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:03.345 "is_configured": true, 00:11:03.345 "data_offset": 0, 00:11:03.345 "data_size": 65536 00:11:03.345 }, 00:11:03.345 { 00:11:03.345 "name": "BaseBdev3", 00:11:03.345 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:03.345 "is_configured": true, 00:11:03.345 "data_offset": 0, 00:11:03.345 "data_size": 65536 00:11:03.345 }, 00:11:03.345 { 00:11:03.345 "name": "BaseBdev4", 00:11:03.345 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:03.345 "is_configured": true, 00:11:03.345 "data_offset": 0, 00:11:03.345 "data_size": 65536 00:11:03.345 } 00:11:03.345 ] 00:11:03.345 }' 00:11:03.345 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.345 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:03.603 17:45:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.603 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.862 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 641adb76-a013-40d1-a99e-b93f2ab7e980 00:11:03.862 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.862 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.862 [2024-11-20 17:45:30.850354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:03.862 [2024-11-20 17:45:30.850428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:03.862 [2024-11-20 17:45:30.850438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:03.862 [2024-11-20 17:45:30.850776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:03.862 
[2024-11-20 17:45:30.850956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:03.862 [2024-11-20 17:45:30.850975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:03.862 [2024-11-20 17:45:30.851307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.862 NewBaseBdev 00:11:03.862 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.862 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:03.862 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:03.862 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.862 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:03.863 [ 00:11:03.863 { 00:11:03.863 "name": "NewBaseBdev", 00:11:03.863 "aliases": [ 00:11:03.863 "641adb76-a013-40d1-a99e-b93f2ab7e980" 00:11:03.863 ], 00:11:03.863 "product_name": "Malloc disk", 00:11:03.863 "block_size": 512, 00:11:03.863 "num_blocks": 65536, 00:11:03.863 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:03.863 "assigned_rate_limits": { 00:11:03.863 "rw_ios_per_sec": 0, 00:11:03.863 "rw_mbytes_per_sec": 0, 00:11:03.863 "r_mbytes_per_sec": 0, 00:11:03.863 "w_mbytes_per_sec": 0 00:11:03.863 }, 00:11:03.863 "claimed": true, 00:11:03.863 "claim_type": "exclusive_write", 00:11:03.863 "zoned": false, 00:11:03.863 "supported_io_types": { 00:11:03.863 "read": true, 00:11:03.863 "write": true, 00:11:03.863 "unmap": true, 00:11:03.863 "flush": true, 00:11:03.863 "reset": true, 00:11:03.863 "nvme_admin": false, 00:11:03.863 "nvme_io": false, 00:11:03.863 "nvme_io_md": false, 00:11:03.863 "write_zeroes": true, 00:11:03.863 "zcopy": true, 00:11:03.863 "get_zone_info": false, 00:11:03.863 "zone_management": false, 00:11:03.863 "zone_append": false, 00:11:03.863 "compare": false, 00:11:03.863 "compare_and_write": false, 00:11:03.863 "abort": true, 00:11:03.863 "seek_hole": false, 00:11:03.863 "seek_data": false, 00:11:03.863 "copy": true, 00:11:03.863 "nvme_iov_md": false 00:11:03.863 }, 00:11:03.863 "memory_domains": [ 00:11:03.863 { 00:11:03.863 "dma_device_id": "system", 00:11:03.863 "dma_device_type": 1 00:11:03.863 }, 00:11:03.863 { 00:11:03.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.863 "dma_device_type": 2 00:11:03.863 } 00:11:03.863 ], 00:11:03.863 "driver_specific": {} 00:11:03.863 } 00:11:03.863 ] 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.863 "name": "Existed_Raid", 00:11:03.863 "uuid": "684dfdc0-0dc3-4130-9728-a70fec3a3fbb", 00:11:03.863 "strip_size_kb": 64, 00:11:03.863 "state": "online", 00:11:03.863 "raid_level": "raid0", 00:11:03.863 "superblock": false, 00:11:03.863 "num_base_bdevs": 4, 00:11:03.863 
"num_base_bdevs_discovered": 4, 00:11:03.863 "num_base_bdevs_operational": 4, 00:11:03.863 "base_bdevs_list": [ 00:11:03.863 { 00:11:03.863 "name": "NewBaseBdev", 00:11:03.863 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:03.863 "is_configured": true, 00:11:03.863 "data_offset": 0, 00:11:03.863 "data_size": 65536 00:11:03.863 }, 00:11:03.863 { 00:11:03.863 "name": "BaseBdev2", 00:11:03.863 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:03.863 "is_configured": true, 00:11:03.863 "data_offset": 0, 00:11:03.863 "data_size": 65536 00:11:03.863 }, 00:11:03.863 { 00:11:03.863 "name": "BaseBdev3", 00:11:03.863 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:03.863 "is_configured": true, 00:11:03.863 "data_offset": 0, 00:11:03.863 "data_size": 65536 00:11:03.863 }, 00:11:03.863 { 00:11:03.863 "name": "BaseBdev4", 00:11:03.863 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:03.863 "is_configured": true, 00:11:03.863 "data_offset": 0, 00:11:03.863 "data_size": 65536 00:11:03.863 } 00:11:03.863 ] 00:11:03.863 }' 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.863 17:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.121 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.121 [2024-11-20 17:45:31.294208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.380 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.380 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.380 "name": "Existed_Raid", 00:11:04.380 "aliases": [ 00:11:04.380 "684dfdc0-0dc3-4130-9728-a70fec3a3fbb" 00:11:04.380 ], 00:11:04.380 "product_name": "Raid Volume", 00:11:04.380 "block_size": 512, 00:11:04.380 "num_blocks": 262144, 00:11:04.380 "uuid": "684dfdc0-0dc3-4130-9728-a70fec3a3fbb", 00:11:04.380 "assigned_rate_limits": { 00:11:04.380 "rw_ios_per_sec": 0, 00:11:04.380 "rw_mbytes_per_sec": 0, 00:11:04.380 "r_mbytes_per_sec": 0, 00:11:04.380 "w_mbytes_per_sec": 0 00:11:04.380 }, 00:11:04.380 "claimed": false, 00:11:04.380 "zoned": false, 00:11:04.380 "supported_io_types": { 00:11:04.380 "read": true, 00:11:04.380 "write": true, 00:11:04.380 "unmap": true, 00:11:04.380 "flush": true, 00:11:04.380 "reset": true, 00:11:04.380 "nvme_admin": false, 00:11:04.380 "nvme_io": false, 00:11:04.380 "nvme_io_md": false, 00:11:04.380 "write_zeroes": true, 00:11:04.380 "zcopy": false, 00:11:04.380 "get_zone_info": false, 00:11:04.380 "zone_management": false, 00:11:04.380 "zone_append": false, 00:11:04.380 "compare": false, 00:11:04.380 "compare_and_write": false, 00:11:04.380 "abort": false, 00:11:04.380 "seek_hole": false, 00:11:04.380 "seek_data": false, 00:11:04.380 "copy": false, 00:11:04.380 "nvme_iov_md": false 00:11:04.380 }, 00:11:04.380 "memory_domains": [ 
00:11:04.380 { 00:11:04.380 "dma_device_id": "system", 00:11:04.380 "dma_device_type": 1 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.380 "dma_device_type": 2 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "dma_device_id": "system", 00:11:04.380 "dma_device_type": 1 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.380 "dma_device_type": 2 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "dma_device_id": "system", 00:11:04.380 "dma_device_type": 1 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.380 "dma_device_type": 2 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "dma_device_id": "system", 00:11:04.380 "dma_device_type": 1 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.380 "dma_device_type": 2 00:11:04.380 } 00:11:04.380 ], 00:11:04.380 "driver_specific": { 00:11:04.380 "raid": { 00:11:04.380 "uuid": "684dfdc0-0dc3-4130-9728-a70fec3a3fbb", 00:11:04.380 "strip_size_kb": 64, 00:11:04.380 "state": "online", 00:11:04.380 "raid_level": "raid0", 00:11:04.380 "superblock": false, 00:11:04.380 "num_base_bdevs": 4, 00:11:04.380 "num_base_bdevs_discovered": 4, 00:11:04.380 "num_base_bdevs_operational": 4, 00:11:04.380 "base_bdevs_list": [ 00:11:04.380 { 00:11:04.380 "name": "NewBaseBdev", 00:11:04.380 "uuid": "641adb76-a013-40d1-a99e-b93f2ab7e980", 00:11:04.380 "is_configured": true, 00:11:04.380 "data_offset": 0, 00:11:04.380 "data_size": 65536 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "name": "BaseBdev2", 00:11:04.380 "uuid": "4131011a-931c-47a4-bfcd-161d3c4374ef", 00:11:04.380 "is_configured": true, 00:11:04.380 "data_offset": 0, 00:11:04.380 "data_size": 65536 00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "name": "BaseBdev3", 00:11:04.380 "uuid": "c9126897-e842-41d7-b0e7-f07268a4989c", 00:11:04.380 "is_configured": true, 00:11:04.380 "data_offset": 0, 00:11:04.380 "data_size": 65536 
00:11:04.380 }, 00:11:04.380 { 00:11:04.380 "name": "BaseBdev4", 00:11:04.380 "uuid": "38ed17b1-72f3-48eb-b479-49c4edddb955", 00:11:04.380 "is_configured": true, 00:11:04.380 "data_offset": 0, 00:11:04.380 "data_size": 65536 00:11:04.380 } 00:11:04.380 ] 00:11:04.380 } 00:11:04.380 } 00:11:04.380 }' 00:11:04.380 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.380 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:04.380 BaseBdev2 00:11:04.380 BaseBdev3 00:11:04.380 BaseBdev4' 00:11:04.380 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.381 
17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.381 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.640 [2024-11-20 17:45:31.593216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.640 [2024-11-20 17:45:31.593289] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.640 [2024-11-20 17:45:31.593402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.640 [2024-11-20 17:45:31.593492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.640 [2024-11-20 17:45:31.593511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69794 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69794 ']' 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69794 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69794 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.640 killing process with pid 69794 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69794' 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69794 00:11:04.640 [2024-11-20 17:45:31.638590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.640 17:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69794 00:11:05.207 [2024-11-20 17:45:32.155445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:06.588 00:11:06.588 real 0m12.297s 00:11:06.588 user 0m19.175s 00:11:06.588 sys 0m2.078s 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.588 ************************************ 00:11:06.588 END TEST raid_state_function_test 00:11:06.588 ************************************ 00:11:06.588 17:45:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:06.588 17:45:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:06.588 17:45:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.588 17:45:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.588 ************************************ 00:11:06.588 START TEST raid_state_function_test_sb 00:11:06.588 ************************************ 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:06.588 
17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70471 00:11:06.588 Process raid pid: 70471 00:11:06.588 17:45:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70471' 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70471 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70471 ']' 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.588 17:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.848 [2024-11-20 17:45:33.762621] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:11:06.848 [2024-11-20 17:45:33.762741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.848 [2024-11-20 17:45:33.943431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.105 [2024-11-20 17:45:34.088577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.364 [2024-11-20 17:45:34.346157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.364 [2024-11-20 17:45:34.346216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.622 17:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.623 [2024-11-20 17:45:34.598770] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.623 [2024-11-20 17:45:34.598879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.623 [2024-11-20 17:45:34.598904] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.623 [2024-11-20 17:45:34.598923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.623 [2024-11-20 17:45:34.598934] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:07.623 [2024-11-20 17:45:34.598954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:07.623 [2024-11-20 17:45:34.598967] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:07.623 [2024-11-20 17:45:34.598986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.623 17:45:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.623 "name": "Existed_Raid", 00:11:07.623 "uuid": "9ddb335e-e1ec-4967-9a63-7b4745a370a6", 00:11:07.623 "strip_size_kb": 64, 00:11:07.623 "state": "configuring", 00:11:07.623 "raid_level": "raid0", 00:11:07.623 "superblock": true, 00:11:07.623 "num_base_bdevs": 4, 00:11:07.623 "num_base_bdevs_discovered": 0, 00:11:07.623 "num_base_bdevs_operational": 4, 00:11:07.623 "base_bdevs_list": [ 00:11:07.623 { 00:11:07.623 "name": "BaseBdev1", 00:11:07.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.623 "is_configured": false, 00:11:07.623 "data_offset": 0, 00:11:07.623 "data_size": 0 00:11:07.623 }, 00:11:07.623 { 00:11:07.623 "name": "BaseBdev2", 00:11:07.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.623 "is_configured": false, 00:11:07.623 "data_offset": 0, 00:11:07.623 "data_size": 0 00:11:07.623 }, 00:11:07.623 { 00:11:07.623 "name": "BaseBdev3", 00:11:07.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.623 "is_configured": false, 00:11:07.623 "data_offset": 0, 00:11:07.623 "data_size": 0 00:11:07.623 }, 00:11:07.623 { 00:11:07.623 "name": "BaseBdev4", 00:11:07.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.623 "is_configured": false, 00:11:07.623 "data_offset": 0, 00:11:07.623 "data_size": 0 00:11:07.623 } 00:11:07.623 ] 00:11:07.623 }' 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.623 17:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.882 [2024-11-20 17:45:35.029900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:07.882 [2024-11-20 17:45:35.029981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.882 [2024-11-20 17:45:35.041878] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.882 [2024-11-20 17:45:35.041939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.882 [2024-11-20 17:45:35.041951] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.882 [2024-11-20 17:45:35.041963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.882 [2024-11-20 17:45:35.041971] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:07.882 [2024-11-20 17:45:35.041983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:07.882 [2024-11-20 17:45:35.041990] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:07.882 [2024-11-20 17:45:35.042002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.882 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.141 [2024-11-20 17:45:35.104776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.141 BaseBdev1 00:11:08.141 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.141 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:08.141 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:08.141 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.141 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.142 [ 00:11:08.142 { 00:11:08.142 "name": "BaseBdev1", 00:11:08.142 "aliases": [ 00:11:08.142 "4d833af5-6ebb-4bd7-805a-bfb6b3aaf80e" 00:11:08.142 ], 00:11:08.142 "product_name": "Malloc disk", 00:11:08.142 "block_size": 512, 00:11:08.142 "num_blocks": 65536, 00:11:08.142 "uuid": "4d833af5-6ebb-4bd7-805a-bfb6b3aaf80e", 00:11:08.142 "assigned_rate_limits": { 00:11:08.142 "rw_ios_per_sec": 0, 00:11:08.142 "rw_mbytes_per_sec": 0, 00:11:08.142 "r_mbytes_per_sec": 0, 00:11:08.142 "w_mbytes_per_sec": 0 00:11:08.142 }, 00:11:08.142 "claimed": true, 00:11:08.142 "claim_type": "exclusive_write", 00:11:08.142 "zoned": false, 00:11:08.142 "supported_io_types": { 00:11:08.142 "read": true, 00:11:08.142 "write": true, 00:11:08.142 "unmap": true, 00:11:08.142 "flush": true, 00:11:08.142 "reset": true, 00:11:08.142 "nvme_admin": false, 00:11:08.142 "nvme_io": false, 00:11:08.142 "nvme_io_md": false, 00:11:08.142 "write_zeroes": true, 00:11:08.142 "zcopy": true, 00:11:08.142 "get_zone_info": false, 00:11:08.142 "zone_management": false, 00:11:08.142 "zone_append": false, 00:11:08.142 "compare": false, 00:11:08.142 "compare_and_write": false, 00:11:08.142 "abort": true, 00:11:08.142 "seek_hole": false, 00:11:08.142 "seek_data": false, 00:11:08.142 "copy": true, 00:11:08.142 "nvme_iov_md": false 00:11:08.142 }, 00:11:08.142 "memory_domains": [ 00:11:08.142 { 00:11:08.142 "dma_device_id": "system", 00:11:08.142 "dma_device_type": 1 00:11:08.142 }, 00:11:08.142 { 00:11:08.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.142 "dma_device_type": 2 00:11:08.142 } 00:11:08.142 ], 00:11:08.142 "driver_specific": {} 
00:11:08.142 } 00:11:08.142 ] 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.142 "name": "Existed_Raid", 00:11:08.142 "uuid": "712f4111-384a-4d19-807d-d5a0fa205e29", 00:11:08.142 "strip_size_kb": 64, 00:11:08.142 "state": "configuring", 00:11:08.142 "raid_level": "raid0", 00:11:08.142 "superblock": true, 00:11:08.142 "num_base_bdevs": 4, 00:11:08.142 "num_base_bdevs_discovered": 1, 00:11:08.142 "num_base_bdevs_operational": 4, 00:11:08.142 "base_bdevs_list": [ 00:11:08.142 { 00:11:08.142 "name": "BaseBdev1", 00:11:08.142 "uuid": "4d833af5-6ebb-4bd7-805a-bfb6b3aaf80e", 00:11:08.142 "is_configured": true, 00:11:08.142 "data_offset": 2048, 00:11:08.142 "data_size": 63488 00:11:08.142 }, 00:11:08.142 { 00:11:08.142 "name": "BaseBdev2", 00:11:08.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.142 "is_configured": false, 00:11:08.142 "data_offset": 0, 00:11:08.142 "data_size": 0 00:11:08.142 }, 00:11:08.142 { 00:11:08.142 "name": "BaseBdev3", 00:11:08.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.142 "is_configured": false, 00:11:08.142 "data_offset": 0, 00:11:08.142 "data_size": 0 00:11:08.142 }, 00:11:08.142 { 00:11:08.142 "name": "BaseBdev4", 00:11:08.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.142 "is_configured": false, 00:11:08.142 "data_offset": 0, 00:11:08.142 "data_size": 0 00:11:08.142 } 00:11:08.142 ] 00:11:08.142 }' 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.142 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.402 [2024-11-20 17:45:35.520180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:08.402 [2024-11-20 17:45:35.520270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.402 [2024-11-20 17:45:35.528241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.402 [2024-11-20 17:45:35.530452] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.402 [2024-11-20 17:45:35.530499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.402 [2024-11-20 17:45:35.530511] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:08.402 [2024-11-20 17:45:35.530522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.402 [2024-11-20 17:45:35.530529] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:08.402 [2024-11-20 17:45:35.530538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.402 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:08.403 17:45:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.403 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.661 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.662 "name": 
"Existed_Raid", 00:11:08.662 "uuid": "fb647ed7-3744-4631-8995-a9ebd053661f", 00:11:08.662 "strip_size_kb": 64, 00:11:08.662 "state": "configuring", 00:11:08.662 "raid_level": "raid0", 00:11:08.662 "superblock": true, 00:11:08.662 "num_base_bdevs": 4, 00:11:08.662 "num_base_bdevs_discovered": 1, 00:11:08.662 "num_base_bdevs_operational": 4, 00:11:08.662 "base_bdevs_list": [ 00:11:08.662 { 00:11:08.662 "name": "BaseBdev1", 00:11:08.662 "uuid": "4d833af5-6ebb-4bd7-805a-bfb6b3aaf80e", 00:11:08.662 "is_configured": true, 00:11:08.662 "data_offset": 2048, 00:11:08.662 "data_size": 63488 00:11:08.662 }, 00:11:08.662 { 00:11:08.662 "name": "BaseBdev2", 00:11:08.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.662 "is_configured": false, 00:11:08.662 "data_offset": 0, 00:11:08.662 "data_size": 0 00:11:08.662 }, 00:11:08.662 { 00:11:08.662 "name": "BaseBdev3", 00:11:08.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.662 "is_configured": false, 00:11:08.662 "data_offset": 0, 00:11:08.662 "data_size": 0 00:11:08.662 }, 00:11:08.662 { 00:11:08.662 "name": "BaseBdev4", 00:11:08.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.662 "is_configured": false, 00:11:08.662 "data_offset": 0, 00:11:08.662 "data_size": 0 00:11:08.662 } 00:11:08.662 ] 00:11:08.662 }' 00:11:08.662 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.662 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 17:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.921 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.921 17:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 [2024-11-20 17:45:36.020755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:08.921 BaseBdev2 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 [ 00:11:08.921 { 00:11:08.921 "name": "BaseBdev2", 00:11:08.921 "aliases": [ 00:11:08.921 "5951b296-bdf5-4ce6-808a-bf03e54daa20" 00:11:08.921 ], 00:11:08.921 "product_name": "Malloc disk", 00:11:08.921 "block_size": 512, 00:11:08.921 "num_blocks": 65536, 00:11:08.921 "uuid": "5951b296-bdf5-4ce6-808a-bf03e54daa20", 00:11:08.921 
"assigned_rate_limits": { 00:11:08.921 "rw_ios_per_sec": 0, 00:11:08.921 "rw_mbytes_per_sec": 0, 00:11:08.921 "r_mbytes_per_sec": 0, 00:11:08.921 "w_mbytes_per_sec": 0 00:11:08.921 }, 00:11:08.921 "claimed": true, 00:11:08.921 "claim_type": "exclusive_write", 00:11:08.921 "zoned": false, 00:11:08.921 "supported_io_types": { 00:11:08.921 "read": true, 00:11:08.921 "write": true, 00:11:08.921 "unmap": true, 00:11:08.921 "flush": true, 00:11:08.921 "reset": true, 00:11:08.921 "nvme_admin": false, 00:11:08.921 "nvme_io": false, 00:11:08.921 "nvme_io_md": false, 00:11:08.921 "write_zeroes": true, 00:11:08.921 "zcopy": true, 00:11:08.921 "get_zone_info": false, 00:11:08.921 "zone_management": false, 00:11:08.921 "zone_append": false, 00:11:08.921 "compare": false, 00:11:08.921 "compare_and_write": false, 00:11:08.921 "abort": true, 00:11:08.921 "seek_hole": false, 00:11:08.921 "seek_data": false, 00:11:08.921 "copy": true, 00:11:08.921 "nvme_iov_md": false 00:11:08.921 }, 00:11:08.921 "memory_domains": [ 00:11:08.921 { 00:11:08.921 "dma_device_id": "system", 00:11:08.921 "dma_device_type": 1 00:11:08.921 }, 00:11:08.921 { 00:11:08.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.921 "dma_device_type": 2 00:11:08.921 } 00:11:08.921 ], 00:11:08.921 "driver_specific": {} 00:11:08.921 } 00:11:08.921 ] 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.180 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.180 "name": "Existed_Raid", 00:11:09.180 "uuid": "fb647ed7-3744-4631-8995-a9ebd053661f", 00:11:09.180 "strip_size_kb": 64, 00:11:09.180 "state": "configuring", 00:11:09.180 "raid_level": "raid0", 00:11:09.180 "superblock": true, 00:11:09.180 "num_base_bdevs": 4, 00:11:09.180 "num_base_bdevs_discovered": 2, 00:11:09.180 "num_base_bdevs_operational": 4, 
00:11:09.180 "base_bdevs_list": [ 00:11:09.180 { 00:11:09.180 "name": "BaseBdev1", 00:11:09.180 "uuid": "4d833af5-6ebb-4bd7-805a-bfb6b3aaf80e", 00:11:09.180 "is_configured": true, 00:11:09.180 "data_offset": 2048, 00:11:09.180 "data_size": 63488 00:11:09.180 }, 00:11:09.180 { 00:11:09.180 "name": "BaseBdev2", 00:11:09.180 "uuid": "5951b296-bdf5-4ce6-808a-bf03e54daa20", 00:11:09.181 "is_configured": true, 00:11:09.181 "data_offset": 2048, 00:11:09.181 "data_size": 63488 00:11:09.181 }, 00:11:09.181 { 00:11:09.181 "name": "BaseBdev3", 00:11:09.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.181 "is_configured": false, 00:11:09.181 "data_offset": 0, 00:11:09.181 "data_size": 0 00:11:09.181 }, 00:11:09.181 { 00:11:09.181 "name": "BaseBdev4", 00:11:09.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.181 "is_configured": false, 00:11:09.181 "data_offset": 0, 00:11:09.181 "data_size": 0 00:11:09.181 } 00:11:09.181 ] 00:11:09.181 }' 00:11:09.181 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.181 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 [2024-11-20 17:45:36.583761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.440 BaseBdev3 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.440 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.440 [ 00:11:09.440 { 00:11:09.440 "name": "BaseBdev3", 00:11:09.440 "aliases": [ 00:11:09.440 "86ee704e-34ed-4e57-a683-d25ba32d16bd" 00:11:09.440 ], 00:11:09.440 "product_name": "Malloc disk", 00:11:09.440 "block_size": 512, 00:11:09.440 "num_blocks": 65536, 00:11:09.440 "uuid": "86ee704e-34ed-4e57-a683-d25ba32d16bd", 00:11:09.440 "assigned_rate_limits": { 00:11:09.440 "rw_ios_per_sec": 0, 00:11:09.440 "rw_mbytes_per_sec": 0, 00:11:09.440 "r_mbytes_per_sec": 0, 00:11:09.440 "w_mbytes_per_sec": 0 00:11:09.440 }, 00:11:09.440 "claimed": true, 00:11:09.440 "claim_type": "exclusive_write", 00:11:09.440 "zoned": false, 00:11:09.440 "supported_io_types": { 00:11:09.440 "read": true, 00:11:09.440 
"write": true, 00:11:09.440 "unmap": true, 00:11:09.440 "flush": true, 00:11:09.440 "reset": true, 00:11:09.440 "nvme_admin": false, 00:11:09.440 "nvme_io": false, 00:11:09.440 "nvme_io_md": false, 00:11:09.440 "write_zeroes": true, 00:11:09.440 "zcopy": true, 00:11:09.700 "get_zone_info": false, 00:11:09.700 "zone_management": false, 00:11:09.700 "zone_append": false, 00:11:09.700 "compare": false, 00:11:09.700 "compare_and_write": false, 00:11:09.700 "abort": true, 00:11:09.700 "seek_hole": false, 00:11:09.700 "seek_data": false, 00:11:09.700 "copy": true, 00:11:09.700 "nvme_iov_md": false 00:11:09.700 }, 00:11:09.700 "memory_domains": [ 00:11:09.700 { 00:11:09.700 "dma_device_id": "system", 00:11:09.700 "dma_device_type": 1 00:11:09.700 }, 00:11:09.700 { 00:11:09.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.700 "dma_device_type": 2 00:11:09.700 } 00:11:09.700 ], 00:11:09.700 "driver_specific": {} 00:11:09.700 } 00:11:09.700 ] 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.700 "name": "Existed_Raid", 00:11:09.700 "uuid": "fb647ed7-3744-4631-8995-a9ebd053661f", 00:11:09.700 "strip_size_kb": 64, 00:11:09.700 "state": "configuring", 00:11:09.700 "raid_level": "raid0", 00:11:09.700 "superblock": true, 00:11:09.700 "num_base_bdevs": 4, 00:11:09.700 "num_base_bdevs_discovered": 3, 00:11:09.700 "num_base_bdevs_operational": 4, 00:11:09.700 "base_bdevs_list": [ 00:11:09.700 { 00:11:09.700 "name": "BaseBdev1", 00:11:09.700 "uuid": "4d833af5-6ebb-4bd7-805a-bfb6b3aaf80e", 00:11:09.700 "is_configured": true, 00:11:09.700 "data_offset": 2048, 00:11:09.700 "data_size": 63488 00:11:09.700 }, 00:11:09.700 { 00:11:09.700 "name": "BaseBdev2", 00:11:09.700 "uuid": 
"5951b296-bdf5-4ce6-808a-bf03e54daa20", 00:11:09.700 "is_configured": true, 00:11:09.700 "data_offset": 2048, 00:11:09.700 "data_size": 63488 00:11:09.700 }, 00:11:09.700 { 00:11:09.700 "name": "BaseBdev3", 00:11:09.700 "uuid": "86ee704e-34ed-4e57-a683-d25ba32d16bd", 00:11:09.700 "is_configured": true, 00:11:09.700 "data_offset": 2048, 00:11:09.700 "data_size": 63488 00:11:09.700 }, 00:11:09.700 { 00:11:09.700 "name": "BaseBdev4", 00:11:09.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.700 "is_configured": false, 00:11:09.700 "data_offset": 0, 00:11:09.700 "data_size": 0 00:11:09.700 } 00:11:09.700 ] 00:11:09.700 }' 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.700 17:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.959 [2024-11-20 17:45:37.092331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.959 [2024-11-20 17:45:37.092806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.959 [2024-11-20 17:45:37.092870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.959 [2024-11-20 17:45:37.093258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:09.959 BaseBdev4 00:11:09.959 [2024-11-20 17:45:37.093493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.959 [2024-11-20 17:45:37.093510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:09.959 [2024-11-20 17:45:37.093679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.959 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.959 [ 00:11:09.959 { 00:11:09.959 "name": "BaseBdev4", 00:11:09.959 "aliases": [ 00:11:09.959 "1e6582c0-5b30-4a93-9046-83d2887beffd" 00:11:09.959 ], 00:11:09.959 "product_name": "Malloc disk", 00:11:09.959 "block_size": 512, 00:11:09.959 
"num_blocks": 65536, 00:11:09.959 "uuid": "1e6582c0-5b30-4a93-9046-83d2887beffd", 00:11:09.959 "assigned_rate_limits": { 00:11:09.959 "rw_ios_per_sec": 0, 00:11:09.959 "rw_mbytes_per_sec": 0, 00:11:09.959 "r_mbytes_per_sec": 0, 00:11:09.959 "w_mbytes_per_sec": 0 00:11:09.959 }, 00:11:09.959 "claimed": true, 00:11:09.959 "claim_type": "exclusive_write", 00:11:09.959 "zoned": false, 00:11:09.959 "supported_io_types": { 00:11:09.959 "read": true, 00:11:09.959 "write": true, 00:11:09.959 "unmap": true, 00:11:09.959 "flush": true, 00:11:09.959 "reset": true, 00:11:09.959 "nvme_admin": false, 00:11:09.959 "nvme_io": false, 00:11:09.959 "nvme_io_md": false, 00:11:09.959 "write_zeroes": true, 00:11:09.959 "zcopy": true, 00:11:09.959 "get_zone_info": false, 00:11:09.959 "zone_management": false, 00:11:09.959 "zone_append": false, 00:11:09.959 "compare": false, 00:11:09.959 "compare_and_write": false, 00:11:09.959 "abort": true, 00:11:09.959 "seek_hole": false, 00:11:09.959 "seek_data": false, 00:11:09.959 "copy": true, 00:11:09.959 "nvme_iov_md": false 00:11:09.959 }, 00:11:09.959 "memory_domains": [ 00:11:09.959 { 00:11:09.959 "dma_device_id": "system", 00:11:09.959 "dma_device_type": 1 00:11:09.959 }, 00:11:09.959 { 00:11:09.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.959 "dma_device_type": 2 00:11:09.959 } 00:11:09.959 ], 00:11:09.959 "driver_specific": {} 00:11:09.959 } 00:11:09.959 ] 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.219 "name": "Existed_Raid", 00:11:10.219 "uuid": "fb647ed7-3744-4631-8995-a9ebd053661f", 00:11:10.219 "strip_size_kb": 64, 00:11:10.219 "state": "online", 00:11:10.219 "raid_level": "raid0", 00:11:10.219 "superblock": true, 00:11:10.219 "num_base_bdevs": 4, 
00:11:10.219 "num_base_bdevs_discovered": 4, 00:11:10.219 "num_base_bdevs_operational": 4, 00:11:10.219 "base_bdevs_list": [ 00:11:10.219 { 00:11:10.219 "name": "BaseBdev1", 00:11:10.219 "uuid": "4d833af5-6ebb-4bd7-805a-bfb6b3aaf80e", 00:11:10.219 "is_configured": true, 00:11:10.219 "data_offset": 2048, 00:11:10.219 "data_size": 63488 00:11:10.219 }, 00:11:10.219 { 00:11:10.219 "name": "BaseBdev2", 00:11:10.219 "uuid": "5951b296-bdf5-4ce6-808a-bf03e54daa20", 00:11:10.219 "is_configured": true, 00:11:10.219 "data_offset": 2048, 00:11:10.219 "data_size": 63488 00:11:10.219 }, 00:11:10.219 { 00:11:10.219 "name": "BaseBdev3", 00:11:10.219 "uuid": "86ee704e-34ed-4e57-a683-d25ba32d16bd", 00:11:10.219 "is_configured": true, 00:11:10.219 "data_offset": 2048, 00:11:10.219 "data_size": 63488 00:11:10.219 }, 00:11:10.219 { 00:11:10.219 "name": "BaseBdev4", 00:11:10.219 "uuid": "1e6582c0-5b30-4a93-9046-83d2887beffd", 00:11:10.219 "is_configured": true, 00:11:10.219 "data_offset": 2048, 00:11:10.219 "data_size": 63488 00:11:10.219 } 00:11:10.219 ] 00:11:10.219 }' 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.219 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.499 
17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.499 [2024-11-20 17:45:37.615984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.499 "name": "Existed_Raid", 00:11:10.499 "aliases": [ 00:11:10.499 "fb647ed7-3744-4631-8995-a9ebd053661f" 00:11:10.499 ], 00:11:10.499 "product_name": "Raid Volume", 00:11:10.499 "block_size": 512, 00:11:10.499 "num_blocks": 253952, 00:11:10.499 "uuid": "fb647ed7-3744-4631-8995-a9ebd053661f", 00:11:10.499 "assigned_rate_limits": { 00:11:10.499 "rw_ios_per_sec": 0, 00:11:10.499 "rw_mbytes_per_sec": 0, 00:11:10.499 "r_mbytes_per_sec": 0, 00:11:10.499 "w_mbytes_per_sec": 0 00:11:10.499 }, 00:11:10.499 "claimed": false, 00:11:10.499 "zoned": false, 00:11:10.499 "supported_io_types": { 00:11:10.499 "read": true, 00:11:10.499 "write": true, 00:11:10.499 "unmap": true, 00:11:10.499 "flush": true, 00:11:10.499 "reset": true, 00:11:10.499 "nvme_admin": false, 00:11:10.499 "nvme_io": false, 00:11:10.499 "nvme_io_md": false, 00:11:10.499 "write_zeroes": true, 00:11:10.499 "zcopy": false, 00:11:10.499 "get_zone_info": false, 00:11:10.499 "zone_management": false, 00:11:10.499 "zone_append": false, 00:11:10.499 "compare": false, 00:11:10.499 "compare_and_write": false, 00:11:10.499 "abort": false, 00:11:10.499 "seek_hole": false, 00:11:10.499 "seek_data": false, 00:11:10.499 "copy": false, 00:11:10.499 
"nvme_iov_md": false 00:11:10.499 }, 00:11:10.499 "memory_domains": [ 00:11:10.499 { 00:11:10.499 "dma_device_id": "system", 00:11:10.499 "dma_device_type": 1 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.499 "dma_device_type": 2 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "dma_device_id": "system", 00:11:10.499 "dma_device_type": 1 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.499 "dma_device_type": 2 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "dma_device_id": "system", 00:11:10.499 "dma_device_type": 1 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.499 "dma_device_type": 2 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "dma_device_id": "system", 00:11:10.499 "dma_device_type": 1 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.499 "dma_device_type": 2 00:11:10.499 } 00:11:10.499 ], 00:11:10.499 "driver_specific": { 00:11:10.499 "raid": { 00:11:10.499 "uuid": "fb647ed7-3744-4631-8995-a9ebd053661f", 00:11:10.499 "strip_size_kb": 64, 00:11:10.499 "state": "online", 00:11:10.499 "raid_level": "raid0", 00:11:10.499 "superblock": true, 00:11:10.499 "num_base_bdevs": 4, 00:11:10.499 "num_base_bdevs_discovered": 4, 00:11:10.499 "num_base_bdevs_operational": 4, 00:11:10.499 "base_bdevs_list": [ 00:11:10.499 { 00:11:10.499 "name": "BaseBdev1", 00:11:10.499 "uuid": "4d833af5-6ebb-4bd7-805a-bfb6b3aaf80e", 00:11:10.499 "is_configured": true, 00:11:10.499 "data_offset": 2048, 00:11:10.499 "data_size": 63488 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "name": "BaseBdev2", 00:11:10.499 "uuid": "5951b296-bdf5-4ce6-808a-bf03e54daa20", 00:11:10.499 "is_configured": true, 00:11:10.499 "data_offset": 2048, 00:11:10.499 "data_size": 63488 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "name": "BaseBdev3", 00:11:10.499 "uuid": "86ee704e-34ed-4e57-a683-d25ba32d16bd", 00:11:10.499 "is_configured": true, 
00:11:10.499 "data_offset": 2048, 00:11:10.499 "data_size": 63488 00:11:10.499 }, 00:11:10.499 { 00:11:10.499 "name": "BaseBdev4", 00:11:10.499 "uuid": "1e6582c0-5b30-4a93-9046-83d2887beffd", 00:11:10.499 "is_configured": true, 00:11:10.499 "data_offset": 2048, 00:11:10.499 "data_size": 63488 00:11:10.499 } 00:11:10.499 ] 00:11:10.499 } 00:11:10.499 } 00:11:10.499 }' 00:11:10.499 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:10.768 BaseBdev2 00:11:10.768 BaseBdev3 00:11:10.768 BaseBdev4' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.768 17:45:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.768 17:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.768 [2024-11-20 17:45:37.911190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.768 [2024-11-20 17:45:37.911329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.768 [2024-11-20 17:45:37.911433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.028 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.028 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:11.028 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:11.028 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:11.028 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:11.028 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:11.028 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:11.028 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.029 "name": "Existed_Raid", 00:11:11.029 "uuid": "fb647ed7-3744-4631-8995-a9ebd053661f", 00:11:11.029 "strip_size_kb": 64, 00:11:11.029 "state": "offline", 00:11:11.029 "raid_level": "raid0", 00:11:11.029 "superblock": true, 00:11:11.029 "num_base_bdevs": 4, 00:11:11.029 "num_base_bdevs_discovered": 3, 00:11:11.029 "num_base_bdevs_operational": 3, 00:11:11.029 "base_bdevs_list": [ 00:11:11.029 { 00:11:11.029 "name": null, 00:11:11.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.029 "is_configured": false, 00:11:11.029 "data_offset": 0, 00:11:11.029 "data_size": 63488 00:11:11.029 }, 00:11:11.029 { 00:11:11.029 "name": "BaseBdev2", 00:11:11.029 "uuid": "5951b296-bdf5-4ce6-808a-bf03e54daa20", 00:11:11.029 "is_configured": true, 00:11:11.029 "data_offset": 2048, 00:11:11.029 "data_size": 63488 00:11:11.029 }, 00:11:11.029 { 00:11:11.029 "name": "BaseBdev3", 00:11:11.029 "uuid": "86ee704e-34ed-4e57-a683-d25ba32d16bd", 00:11:11.029 "is_configured": true, 00:11:11.029 "data_offset": 2048, 00:11:11.029 "data_size": 63488 00:11:11.029 }, 00:11:11.029 { 00:11:11.029 "name": "BaseBdev4", 00:11:11.029 "uuid": "1e6582c0-5b30-4a93-9046-83d2887beffd", 00:11:11.029 "is_configured": true, 00:11:11.029 "data_offset": 2048, 00:11:11.029 "data_size": 63488 00:11:11.029 } 00:11:11.029 ] 00:11:11.029 }' 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.029 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:11.598 17:45:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.598 [2024-11-20 17:45:38.575016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.598 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.598 [2024-11-20 17:45:38.751234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:11.858 17:45:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.858 17:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.858 [2024-11-20 17:45:38.930304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:11.858 [2024-11-20 17:45:38.930389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.119 BaseBdev2 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.119 [ 00:11:12.119 { 00:11:12.119 "name": "BaseBdev2", 00:11:12.119 "aliases": [ 00:11:12.119 
"b5579230-92db-40e9-aaa8-5cb51a8134f0" 00:11:12.119 ], 00:11:12.119 "product_name": "Malloc disk", 00:11:12.119 "block_size": 512, 00:11:12.119 "num_blocks": 65536, 00:11:12.119 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:12.119 "assigned_rate_limits": { 00:11:12.119 "rw_ios_per_sec": 0, 00:11:12.119 "rw_mbytes_per_sec": 0, 00:11:12.119 "r_mbytes_per_sec": 0, 00:11:12.119 "w_mbytes_per_sec": 0 00:11:12.119 }, 00:11:12.119 "claimed": false, 00:11:12.119 "zoned": false, 00:11:12.119 "supported_io_types": { 00:11:12.119 "read": true, 00:11:12.119 "write": true, 00:11:12.119 "unmap": true, 00:11:12.119 "flush": true, 00:11:12.119 "reset": true, 00:11:12.119 "nvme_admin": false, 00:11:12.119 "nvme_io": false, 00:11:12.119 "nvme_io_md": false, 00:11:12.119 "write_zeroes": true, 00:11:12.119 "zcopy": true, 00:11:12.119 "get_zone_info": false, 00:11:12.119 "zone_management": false, 00:11:12.119 "zone_append": false, 00:11:12.119 "compare": false, 00:11:12.119 "compare_and_write": false, 00:11:12.119 "abort": true, 00:11:12.119 "seek_hole": false, 00:11:12.119 "seek_data": false, 00:11:12.119 "copy": true, 00:11:12.119 "nvme_iov_md": false 00:11:12.119 }, 00:11:12.119 "memory_domains": [ 00:11:12.119 { 00:11:12.119 "dma_device_id": "system", 00:11:12.119 "dma_device_type": 1 00:11:12.119 }, 00:11:12.119 { 00:11:12.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.119 "dma_device_type": 2 00:11:12.119 } 00:11:12.119 ], 00:11:12.119 "driver_specific": {} 00:11:12.119 } 00:11:12.119 ] 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.119 17:45:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.119 BaseBdev3 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:12.119 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.120 [ 00:11:12.120 { 
00:11:12.120 "name": "BaseBdev3", 00:11:12.120 "aliases": [ 00:11:12.120 "e7bbf0d7-93b7-4aa6-bebb-7e326957415f" 00:11:12.120 ], 00:11:12.120 "product_name": "Malloc disk", 00:11:12.120 "block_size": 512, 00:11:12.120 "num_blocks": 65536, 00:11:12.120 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:12.120 "assigned_rate_limits": { 00:11:12.120 "rw_ios_per_sec": 0, 00:11:12.120 "rw_mbytes_per_sec": 0, 00:11:12.120 "r_mbytes_per_sec": 0, 00:11:12.120 "w_mbytes_per_sec": 0 00:11:12.120 }, 00:11:12.120 "claimed": false, 00:11:12.120 "zoned": false, 00:11:12.120 "supported_io_types": { 00:11:12.120 "read": true, 00:11:12.120 "write": true, 00:11:12.120 "unmap": true, 00:11:12.120 "flush": true, 00:11:12.120 "reset": true, 00:11:12.120 "nvme_admin": false, 00:11:12.120 "nvme_io": false, 00:11:12.120 "nvme_io_md": false, 00:11:12.120 "write_zeroes": true, 00:11:12.120 "zcopy": true, 00:11:12.120 "get_zone_info": false, 00:11:12.120 "zone_management": false, 00:11:12.120 "zone_append": false, 00:11:12.120 "compare": false, 00:11:12.120 "compare_and_write": false, 00:11:12.120 "abort": true, 00:11:12.120 "seek_hole": false, 00:11:12.120 "seek_data": false, 00:11:12.120 "copy": true, 00:11:12.120 "nvme_iov_md": false 00:11:12.120 }, 00:11:12.120 "memory_domains": [ 00:11:12.120 { 00:11:12.120 "dma_device_id": "system", 00:11:12.120 "dma_device_type": 1 00:11:12.120 }, 00:11:12.120 { 00:11:12.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.120 "dma_device_type": 2 00:11:12.120 } 00:11:12.120 ], 00:11:12.120 "driver_specific": {} 00:11:12.120 } 00:11:12.120 ] 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.120 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.379 BaseBdev4 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.379 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:12.379 [ 00:11:12.379 { 00:11:12.379 "name": "BaseBdev4", 00:11:12.379 "aliases": [ 00:11:12.379 "341b14ff-4e9c-4289-a280-903701add041" 00:11:12.379 ], 00:11:12.379 "product_name": "Malloc disk", 00:11:12.379 "block_size": 512, 00:11:12.379 "num_blocks": 65536, 00:11:12.379 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:12.379 "assigned_rate_limits": { 00:11:12.379 "rw_ios_per_sec": 0, 00:11:12.379 "rw_mbytes_per_sec": 0, 00:11:12.379 "r_mbytes_per_sec": 0, 00:11:12.379 "w_mbytes_per_sec": 0 00:11:12.379 }, 00:11:12.379 "claimed": false, 00:11:12.379 "zoned": false, 00:11:12.379 "supported_io_types": { 00:11:12.379 "read": true, 00:11:12.379 "write": true, 00:11:12.379 "unmap": true, 00:11:12.379 "flush": true, 00:11:12.379 "reset": true, 00:11:12.379 "nvme_admin": false, 00:11:12.379 "nvme_io": false, 00:11:12.379 "nvme_io_md": false, 00:11:12.379 "write_zeroes": true, 00:11:12.379 "zcopy": true, 00:11:12.379 "get_zone_info": false, 00:11:12.379 "zone_management": false, 00:11:12.379 "zone_append": false, 00:11:12.379 "compare": false, 00:11:12.379 "compare_and_write": false, 00:11:12.379 "abort": true, 00:11:12.379 "seek_hole": false, 00:11:12.379 "seek_data": false, 00:11:12.379 "copy": true, 00:11:12.379 "nvme_iov_md": false 00:11:12.380 }, 00:11:12.380 "memory_domains": [ 00:11:12.380 { 00:11:12.380 "dma_device_id": "system", 00:11:12.380 "dma_device_type": 1 00:11:12.380 }, 00:11:12.380 { 00:11:12.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.380 "dma_device_type": 2 00:11:12.380 } 00:11:12.380 ], 00:11:12.380 "driver_specific": {} 00:11:12.380 } 00:11:12.380 ] 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.380 17:45:39 
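Each base bdev above is created with `bdev_malloc_create 32 512` (32 MiB, 512-byte blocks), which is why every `bdev_get_bdevs` dump reports `num_blocks: 65536`. A quick arithmetic check, using the superblock reservation of 2048 blocks observed later in this log (`data_offset: 2048`, `data_size: 63488` once the bdevs join the `-s` raid):

```shell
#!/usr/bin/env bash
size_mb=32       # first argument to bdev_malloc_create
block_size=512   # second argument to bdev_malloc_create
sb_blocks=2048   # blocks reserved per base bdev, as observed in this log's data_offset

# 32 MiB / 512 B per block = 65536 blocks, matching num_blocks in the JSON above.
num_blocks=$((size_mb * 1024 * 1024 / block_size))
# Blocks left for raid data after the superblock reservation.
data_size=$((num_blocks - sb_blocks))

echo "num_blocks=$num_blocks data_offset=$sb_blocks data_size=$data_size"
# → num_blocks=65536 data_offset=2048 data_size=63488
```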
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.380 [2024-11-20 17:45:39.386886] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.380 [2024-11-20 17:45:39.386960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.380 [2024-11-20 17:45:39.386998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.380 [2024-11-20 17:45:39.389639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.380 [2024-11-20 17:45:39.389709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.380 "name": "Existed_Raid", 00:11:12.380 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:12.380 "strip_size_kb": 64, 00:11:12.380 "state": "configuring", 00:11:12.380 "raid_level": "raid0", 00:11:12.380 "superblock": true, 00:11:12.380 "num_base_bdevs": 4, 00:11:12.380 "num_base_bdevs_discovered": 3, 00:11:12.380 "num_base_bdevs_operational": 4, 00:11:12.380 "base_bdevs_list": [ 00:11:12.380 { 00:11:12.380 "name": "BaseBdev1", 00:11:12.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.380 "is_configured": false, 00:11:12.380 "data_offset": 0, 00:11:12.380 "data_size": 0 00:11:12.380 }, 00:11:12.380 { 00:11:12.380 "name": "BaseBdev2", 00:11:12.380 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:12.380 "is_configured": true, 00:11:12.380 "data_offset": 2048, 00:11:12.380 "data_size": 63488 
00:11:12.380 }, 00:11:12.380 { 00:11:12.380 "name": "BaseBdev3", 00:11:12.380 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:12.380 "is_configured": true, 00:11:12.380 "data_offset": 2048, 00:11:12.380 "data_size": 63488 00:11:12.380 }, 00:11:12.380 { 00:11:12.380 "name": "BaseBdev4", 00:11:12.380 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:12.380 "is_configured": true, 00:11:12.380 "data_offset": 2048, 00:11:12.380 "data_size": 63488 00:11:12.380 } 00:11:12.380 ] 00:11:12.380 }' 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.380 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.639 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:12.639 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.639 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.639 [2024-11-20 17:45:39.802155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.640 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.899 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.899 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.899 "name": "Existed_Raid", 00:11:12.899 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:12.899 "strip_size_kb": 64, 00:11:12.899 "state": "configuring", 00:11:12.899 "raid_level": "raid0", 00:11:12.899 "superblock": true, 00:11:12.899 "num_base_bdevs": 4, 00:11:12.899 "num_base_bdevs_discovered": 2, 00:11:12.899 "num_base_bdevs_operational": 4, 00:11:12.899 "base_bdevs_list": [ 00:11:12.899 { 00:11:12.899 "name": "BaseBdev1", 00:11:12.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.899 "is_configured": false, 00:11:12.899 "data_offset": 0, 00:11:12.899 "data_size": 0 00:11:12.899 }, 00:11:12.899 { 00:11:12.899 "name": null, 00:11:12.899 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:12.899 "is_configured": false, 00:11:12.899 "data_offset": 0, 00:11:12.899 "data_size": 63488 
00:11:12.899 }, 00:11:12.899 { 00:11:12.899 "name": "BaseBdev3", 00:11:12.899 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:12.899 "is_configured": true, 00:11:12.899 "data_offset": 2048, 00:11:12.899 "data_size": 63488 00:11:12.899 }, 00:11:12.899 { 00:11:12.899 "name": "BaseBdev4", 00:11:12.899 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:12.899 "is_configured": true, 00:11:12.899 "data_offset": 2048, 00:11:12.899 "data_size": 63488 00:11:12.899 } 00:11:12.899 ] 00:11:12.899 }' 00:11:12.899 17:45:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.899 17:45:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.158 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.418 [2024-11-20 17:45:40.346886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.418 BaseBdev1 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- 
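The repeated `verify_raid_bdev_state Existed_Raid configuring raid0 64 4` calls above pull the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with jq and compare selected fields against expected values. Below is a self-contained sketch of that check only: the RPC output is replaced by a heredoc with values copied from the verification after `bdev_raid_remove_base_bdev BaseBdev2` (discovered drops to 2), and `get_field` is a hypothetical jq-free stand-in for the helper's `jq -r '.[] | select(.name == "Existed_Raid")'` filtering.

```shell
#!/usr/bin/env bash
# Stubbed bdev_raid_get_bdevs output (flattened; values taken from the log).
raid_bdev_info=$(cat <<'JSON'
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4
}
JSON
)

get_field() {  # crude field extraction, good enough for this flat JSON stub
        sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p" <<<"$raid_bdev_info"
}

# The checks verify_raid_bdev_state performs, expressed directly:
[ "$(get_field state)" = "configuring" ]
[ "$(get_field raid_level)" = "raid0" ]
[ "$(get_field num_base_bdevs_operational)" -eq 4 ]
echo "state=$(get_field state) discovered=$(get_field num_base_bdevs_discovered)/$(get_field num_base_bdevs)"
# → state=configuring discovered=2/4
```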
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.418 [ 00:11:13.418 { 00:11:13.418 "name": "BaseBdev1", 00:11:13.418 "aliases": [ 00:11:13.418 "e792a076-1bf9-4689-aa0b-38351313ec2a" 00:11:13.418 ], 00:11:13.418 "product_name": "Malloc disk", 00:11:13.418 "block_size": 512, 00:11:13.418 "num_blocks": 65536, 00:11:13.418 "uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:13.418 "assigned_rate_limits": { 00:11:13.418 "rw_ios_per_sec": 0, 00:11:13.418 "rw_mbytes_per_sec": 0, 
00:11:13.418 "r_mbytes_per_sec": 0, 00:11:13.418 "w_mbytes_per_sec": 0 00:11:13.418 }, 00:11:13.418 "claimed": true, 00:11:13.418 "claim_type": "exclusive_write", 00:11:13.418 "zoned": false, 00:11:13.418 "supported_io_types": { 00:11:13.418 "read": true, 00:11:13.418 "write": true, 00:11:13.418 "unmap": true, 00:11:13.418 "flush": true, 00:11:13.418 "reset": true, 00:11:13.418 "nvme_admin": false, 00:11:13.418 "nvme_io": false, 00:11:13.418 "nvme_io_md": false, 00:11:13.418 "write_zeroes": true, 00:11:13.418 "zcopy": true, 00:11:13.418 "get_zone_info": false, 00:11:13.418 "zone_management": false, 00:11:13.418 "zone_append": false, 00:11:13.418 "compare": false, 00:11:13.418 "compare_and_write": false, 00:11:13.418 "abort": true, 00:11:13.418 "seek_hole": false, 00:11:13.418 "seek_data": false, 00:11:13.418 "copy": true, 00:11:13.418 "nvme_iov_md": false 00:11:13.418 }, 00:11:13.418 "memory_domains": [ 00:11:13.418 { 00:11:13.418 "dma_device_id": "system", 00:11:13.418 "dma_device_type": 1 00:11:13.418 }, 00:11:13.418 { 00:11:13.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.418 "dma_device_type": 2 00:11:13.418 } 00:11:13.418 ], 00:11:13.418 "driver_specific": {} 00:11:13.418 } 00:11:13.418 ] 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.418 17:45:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.418 "name": "Existed_Raid", 00:11:13.418 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:13.418 "strip_size_kb": 64, 00:11:13.418 "state": "configuring", 00:11:13.418 "raid_level": "raid0", 00:11:13.418 "superblock": true, 00:11:13.418 "num_base_bdevs": 4, 00:11:13.418 "num_base_bdevs_discovered": 3, 00:11:13.418 "num_base_bdevs_operational": 4, 00:11:13.418 "base_bdevs_list": [ 00:11:13.418 { 00:11:13.418 "name": "BaseBdev1", 00:11:13.418 "uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:13.418 "is_configured": true, 00:11:13.418 "data_offset": 2048, 00:11:13.418 "data_size": 63488 00:11:13.418 }, 00:11:13.418 { 
00:11:13.418 "name": null, 00:11:13.418 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:13.418 "is_configured": false, 00:11:13.418 "data_offset": 0, 00:11:13.418 "data_size": 63488 00:11:13.418 }, 00:11:13.418 { 00:11:13.418 "name": "BaseBdev3", 00:11:13.418 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:13.418 "is_configured": true, 00:11:13.418 "data_offset": 2048, 00:11:13.418 "data_size": 63488 00:11:13.418 }, 00:11:13.418 { 00:11:13.418 "name": "BaseBdev4", 00:11:13.418 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:13.418 "is_configured": true, 00:11:13.418 "data_offset": 2048, 00:11:13.418 "data_size": 63488 00:11:13.418 } 00:11:13.418 ] 00:11:13.418 }' 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.418 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.678 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:13.678 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.678 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.678 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.937 [2024-11-20 17:45:40.862186] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.937 17:45:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.937 "name": "Existed_Raid", 00:11:13.937 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:13.937 "strip_size_kb": 64, 00:11:13.937 "state": "configuring", 00:11:13.937 "raid_level": "raid0", 00:11:13.937 "superblock": true, 00:11:13.937 "num_base_bdevs": 4, 00:11:13.937 "num_base_bdevs_discovered": 2, 00:11:13.937 "num_base_bdevs_operational": 4, 00:11:13.937 "base_bdevs_list": [ 00:11:13.937 { 00:11:13.937 "name": "BaseBdev1", 00:11:13.937 "uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:13.937 "is_configured": true, 00:11:13.937 "data_offset": 2048, 00:11:13.937 "data_size": 63488 00:11:13.937 }, 00:11:13.937 { 00:11:13.937 "name": null, 00:11:13.937 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:13.937 "is_configured": false, 00:11:13.937 "data_offset": 0, 00:11:13.937 "data_size": 63488 00:11:13.937 }, 00:11:13.937 { 00:11:13.937 "name": null, 00:11:13.937 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:13.937 "is_configured": false, 00:11:13.937 "data_offset": 0, 00:11:13.937 "data_size": 63488 00:11:13.937 }, 00:11:13.937 { 00:11:13.937 "name": "BaseBdev4", 00:11:13.937 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:13.937 "is_configured": true, 00:11:13.937 "data_offset": 2048, 00:11:13.937 "data_size": 63488 00:11:13.937 } 00:11:13.937 ] 00:11:13.937 }' 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.937 17:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.196 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.196 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.196 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:14.196 
17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.455 [2024-11-20 17:45:41.413252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.455 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.456 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.456 "name": "Existed_Raid", 00:11:14.456 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:14.456 "strip_size_kb": 64, 00:11:14.456 "state": "configuring", 00:11:14.456 "raid_level": "raid0", 00:11:14.456 "superblock": true, 00:11:14.456 "num_base_bdevs": 4, 00:11:14.456 "num_base_bdevs_discovered": 3, 00:11:14.456 "num_base_bdevs_operational": 4, 00:11:14.456 "base_bdevs_list": [ 00:11:14.456 { 00:11:14.456 "name": "BaseBdev1", 00:11:14.456 "uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:14.456 "is_configured": true, 00:11:14.456 "data_offset": 2048, 00:11:14.456 "data_size": 63488 00:11:14.456 }, 00:11:14.456 { 00:11:14.456 "name": null, 00:11:14.456 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:14.456 "is_configured": false, 00:11:14.456 "data_offset": 0, 00:11:14.456 "data_size": 63488 00:11:14.456 }, 00:11:14.456 { 00:11:14.456 "name": "BaseBdev3", 00:11:14.456 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:14.456 "is_configured": true, 00:11:14.456 "data_offset": 2048, 00:11:14.456 "data_size": 63488 00:11:14.456 }, 00:11:14.456 { 00:11:14.456 "name": "BaseBdev4", 00:11:14.456 "uuid": 
"341b14ff-4e9c-4289-a280-903701add041", 00:11:14.456 "is_configured": true, 00:11:14.456 "data_offset": 2048, 00:11:14.456 "data_size": 63488 00:11:14.456 } 00:11:14.456 ] 00:11:14.456 }' 00:11:14.456 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.456 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.716 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.716 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:14.716 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.716 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.716 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.991 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:14.991 17:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:14.991 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.991 17:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.991 [2024-11-20 17:45:41.912442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.991 "name": "Existed_Raid", 00:11:14.991 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:14.991 "strip_size_kb": 64, 00:11:14.991 "state": "configuring", 00:11:14.991 "raid_level": "raid0", 00:11:14.991 "superblock": true, 00:11:14.991 "num_base_bdevs": 4, 00:11:14.991 "num_base_bdevs_discovered": 2, 00:11:14.991 "num_base_bdevs_operational": 4, 00:11:14.991 "base_bdevs_list": [ 00:11:14.991 { 00:11:14.991 "name": null, 00:11:14.991 
"uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:14.991 "is_configured": false, 00:11:14.991 "data_offset": 0, 00:11:14.991 "data_size": 63488 00:11:14.991 }, 00:11:14.991 { 00:11:14.991 "name": null, 00:11:14.991 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:14.991 "is_configured": false, 00:11:14.991 "data_offset": 0, 00:11:14.991 "data_size": 63488 00:11:14.991 }, 00:11:14.991 { 00:11:14.991 "name": "BaseBdev3", 00:11:14.991 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:14.991 "is_configured": true, 00:11:14.991 "data_offset": 2048, 00:11:14.991 "data_size": 63488 00:11:14.991 }, 00:11:14.991 { 00:11:14.991 "name": "BaseBdev4", 00:11:14.991 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:14.991 "is_configured": true, 00:11:14.991 "data_offset": 2048, 00:11:14.991 "data_size": 63488 00:11:14.991 } 00:11:14.991 ] 00:11:14.991 }' 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.991 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.559 [2024-11-20 17:45:42.512662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.559 17:45:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.559 "name": "Existed_Raid", 00:11:15.559 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:15.559 "strip_size_kb": 64, 00:11:15.559 "state": "configuring", 00:11:15.559 "raid_level": "raid0", 00:11:15.559 "superblock": true, 00:11:15.559 "num_base_bdevs": 4, 00:11:15.559 "num_base_bdevs_discovered": 3, 00:11:15.559 "num_base_bdevs_operational": 4, 00:11:15.559 "base_bdevs_list": [ 00:11:15.559 { 00:11:15.559 "name": null, 00:11:15.559 "uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:15.559 "is_configured": false, 00:11:15.559 "data_offset": 0, 00:11:15.559 "data_size": 63488 00:11:15.559 }, 00:11:15.559 { 00:11:15.559 "name": "BaseBdev2", 00:11:15.559 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:15.559 "is_configured": true, 00:11:15.559 "data_offset": 2048, 00:11:15.559 "data_size": 63488 00:11:15.559 }, 00:11:15.559 { 00:11:15.559 "name": "BaseBdev3", 00:11:15.559 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:15.559 "is_configured": true, 00:11:15.559 "data_offset": 2048, 00:11:15.559 "data_size": 63488 00:11:15.559 }, 00:11:15.559 { 00:11:15.559 "name": "BaseBdev4", 00:11:15.559 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:15.559 "is_configured": true, 00:11:15.559 "data_offset": 2048, 00:11:15.559 "data_size": 63488 00:11:15.559 } 00:11:15.559 ] 00:11:15.559 }' 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.559 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.818 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.818 17:45:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.818 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.818 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.818 17:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.079 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:16.079 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.079 17:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e792a076-1bf9-4689-aa0b-38351313ec2a 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 [2024-11-20 17:45:43.089829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:16.079 [2024-11-20 17:45:43.090213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:16.079 [2024-11-20 17:45:43.090231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.079 [2024-11-20 17:45:43.090524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:16.079 [2024-11-20 17:45:43.090667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:16.079 [2024-11-20 17:45:43.090679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:16.079 [2024-11-20 17:45:43.090819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.079 NewBaseBdev 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.079 17:45:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 [ 00:11:16.079 { 00:11:16.079 "name": "NewBaseBdev", 00:11:16.079 "aliases": [ 00:11:16.079 "e792a076-1bf9-4689-aa0b-38351313ec2a" 00:11:16.079 ], 00:11:16.079 "product_name": "Malloc disk", 00:11:16.079 "block_size": 512, 00:11:16.079 "num_blocks": 65536, 00:11:16.079 "uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:16.079 "assigned_rate_limits": { 00:11:16.079 "rw_ios_per_sec": 0, 00:11:16.079 "rw_mbytes_per_sec": 0, 00:11:16.079 "r_mbytes_per_sec": 0, 00:11:16.079 "w_mbytes_per_sec": 0 00:11:16.079 }, 00:11:16.079 "claimed": true, 00:11:16.079 "claim_type": "exclusive_write", 00:11:16.079 "zoned": false, 00:11:16.079 "supported_io_types": { 00:11:16.079 "read": true, 00:11:16.079 "write": true, 00:11:16.079 "unmap": true, 00:11:16.079 "flush": true, 00:11:16.079 "reset": true, 00:11:16.079 "nvme_admin": false, 00:11:16.079 "nvme_io": false, 00:11:16.079 "nvme_io_md": false, 00:11:16.079 "write_zeroes": true, 00:11:16.079 "zcopy": true, 00:11:16.079 "get_zone_info": false, 00:11:16.079 "zone_management": false, 00:11:16.079 "zone_append": false, 00:11:16.079 "compare": false, 00:11:16.079 "compare_and_write": false, 00:11:16.079 "abort": true, 00:11:16.079 "seek_hole": false, 00:11:16.079 "seek_data": false, 00:11:16.079 "copy": true, 00:11:16.079 "nvme_iov_md": false 00:11:16.079 }, 00:11:16.079 "memory_domains": [ 00:11:16.079 { 00:11:16.079 "dma_device_id": "system", 00:11:16.079 "dma_device_type": 1 00:11:16.079 }, 00:11:16.079 { 00:11:16.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.079 "dma_device_type": 2 00:11:16.079 } 00:11:16.079 ], 00:11:16.079 "driver_specific": {} 00:11:16.079 } 00:11:16.079 ] 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.079 17:45:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.079 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.079 "name": "Existed_Raid", 00:11:16.079 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:16.079 "strip_size_kb": 64, 00:11:16.079 
"state": "online", 00:11:16.079 "raid_level": "raid0", 00:11:16.079 "superblock": true, 00:11:16.079 "num_base_bdevs": 4, 00:11:16.079 "num_base_bdevs_discovered": 4, 00:11:16.079 "num_base_bdevs_operational": 4, 00:11:16.079 "base_bdevs_list": [ 00:11:16.079 { 00:11:16.079 "name": "NewBaseBdev", 00:11:16.079 "uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:16.079 "is_configured": true, 00:11:16.079 "data_offset": 2048, 00:11:16.079 "data_size": 63488 00:11:16.079 }, 00:11:16.079 { 00:11:16.079 "name": "BaseBdev2", 00:11:16.079 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:16.080 "is_configured": true, 00:11:16.080 "data_offset": 2048, 00:11:16.080 "data_size": 63488 00:11:16.080 }, 00:11:16.080 { 00:11:16.080 "name": "BaseBdev3", 00:11:16.080 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:16.080 "is_configured": true, 00:11:16.080 "data_offset": 2048, 00:11:16.080 "data_size": 63488 00:11:16.080 }, 00:11:16.080 { 00:11:16.080 "name": "BaseBdev4", 00:11:16.080 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:16.080 "is_configured": true, 00:11:16.080 "data_offset": 2048, 00:11:16.080 "data_size": 63488 00:11:16.080 } 00:11:16.080 ] 00:11:16.080 }' 00:11:16.080 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.080 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.651 
17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 [2024-11-20 17:45:43.605451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.651 "name": "Existed_Raid", 00:11:16.651 "aliases": [ 00:11:16.651 "e1c2fadd-89ac-4076-a9bc-0f850c336dc7" 00:11:16.651 ], 00:11:16.651 "product_name": "Raid Volume", 00:11:16.651 "block_size": 512, 00:11:16.651 "num_blocks": 253952, 00:11:16.651 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:16.651 "assigned_rate_limits": { 00:11:16.651 "rw_ios_per_sec": 0, 00:11:16.651 "rw_mbytes_per_sec": 0, 00:11:16.651 "r_mbytes_per_sec": 0, 00:11:16.651 "w_mbytes_per_sec": 0 00:11:16.651 }, 00:11:16.651 "claimed": false, 00:11:16.651 "zoned": false, 00:11:16.651 "supported_io_types": { 00:11:16.651 "read": true, 00:11:16.651 "write": true, 00:11:16.651 "unmap": true, 00:11:16.651 "flush": true, 00:11:16.651 "reset": true, 00:11:16.651 "nvme_admin": false, 00:11:16.651 "nvme_io": false, 00:11:16.651 "nvme_io_md": false, 00:11:16.651 "write_zeroes": true, 00:11:16.651 "zcopy": false, 00:11:16.651 "get_zone_info": false, 00:11:16.651 "zone_management": false, 00:11:16.651 "zone_append": false, 00:11:16.651 "compare": false, 00:11:16.651 "compare_and_write": false, 00:11:16.651 "abort": 
false, 00:11:16.651 "seek_hole": false, 00:11:16.651 "seek_data": false, 00:11:16.651 "copy": false, 00:11:16.651 "nvme_iov_md": false 00:11:16.651 }, 00:11:16.651 "memory_domains": [ 00:11:16.651 { 00:11:16.651 "dma_device_id": "system", 00:11:16.651 "dma_device_type": 1 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.651 "dma_device_type": 2 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "dma_device_id": "system", 00:11:16.651 "dma_device_type": 1 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.651 "dma_device_type": 2 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "dma_device_id": "system", 00:11:16.651 "dma_device_type": 1 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.651 "dma_device_type": 2 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "dma_device_id": "system", 00:11:16.651 "dma_device_type": 1 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.651 "dma_device_type": 2 00:11:16.651 } 00:11:16.651 ], 00:11:16.651 "driver_specific": { 00:11:16.651 "raid": { 00:11:16.651 "uuid": "e1c2fadd-89ac-4076-a9bc-0f850c336dc7", 00:11:16.651 "strip_size_kb": 64, 00:11:16.651 "state": "online", 00:11:16.651 "raid_level": "raid0", 00:11:16.651 "superblock": true, 00:11:16.651 "num_base_bdevs": 4, 00:11:16.651 "num_base_bdevs_discovered": 4, 00:11:16.651 "num_base_bdevs_operational": 4, 00:11:16.651 "base_bdevs_list": [ 00:11:16.651 { 00:11:16.651 "name": "NewBaseBdev", 00:11:16.651 "uuid": "e792a076-1bf9-4689-aa0b-38351313ec2a", 00:11:16.651 "is_configured": true, 00:11:16.651 "data_offset": 2048, 00:11:16.651 "data_size": 63488 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "name": "BaseBdev2", 00:11:16.651 "uuid": "b5579230-92db-40e9-aaa8-5cb51a8134f0", 00:11:16.651 "is_configured": true, 00:11:16.651 "data_offset": 2048, 00:11:16.651 "data_size": 63488 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 
"name": "BaseBdev3", 00:11:16.651 "uuid": "e7bbf0d7-93b7-4aa6-bebb-7e326957415f", 00:11:16.651 "is_configured": true, 00:11:16.651 "data_offset": 2048, 00:11:16.651 "data_size": 63488 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "name": "BaseBdev4", 00:11:16.651 "uuid": "341b14ff-4e9c-4289-a280-903701add041", 00:11:16.651 "is_configured": true, 00:11:16.651 "data_offset": 2048, 00:11:16.651 "data_size": 63488 00:11:16.651 } 00:11:16.651 ] 00:11:16.651 } 00:11:16.651 } 00:11:16.651 }' 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:16.651 BaseBdev2 00:11:16.651 BaseBdev3 00:11:16.651 BaseBdev4' 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.651 17:45:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.911 [2024-11-20 17:45:43.952476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.911 [2024-11-20 17:45:43.952599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.911 [2024-11-20 17:45:43.952715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.911 [2024-11-20 17:45:43.952938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.911 [2024-11-20 17:45:43.952982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70471 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70471 ']' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70471 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70471 00:11:16.911 killing process with pid 70471 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70471' 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70471 00:11:16.911 [2024-11-20 17:45:43.999542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.911 17:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70471 00:11:17.481 [2024-11-20 17:45:44.439923] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.920 17:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:18.920 00:11:18.920 real 0m12.026s 00:11:18.920 user 0m18.810s 00:11:18.920 sys 0m2.222s 00:11:18.920 ************************************ 00:11:18.920 END TEST raid_state_function_test_sb 00:11:18.920 
************************************ 00:11:18.920 17:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.920 17:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.920 17:45:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:18.920 17:45:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:18.920 17:45:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.920 17:45:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.920 ************************************ 00:11:18.920 START TEST raid_superblock_test 00:11:18.920 ************************************ 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71151 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71151 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71151 ']' 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:18.920 17:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.920 [2024-11-20 17:45:45.845924] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:11:18.920 [2024-11-20 17:45:45.846190] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71151 ] 00:11:18.920 [2024-11-20 17:45:46.023904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.180 [2024-11-20 17:45:46.164104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.439 [2024-11-20 17:45:46.398387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.439 [2024-11-20 17:45:46.398591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:19.699 
17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.699 malloc1 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.699 [2024-11-20 17:45:46.731374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:19.699 [2024-11-20 17:45:46.731448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.699 [2024-11-20 17:45:46.731473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:19.699 [2024-11-20 17:45:46.731483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.699 [2024-11-20 17:45:46.733907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.699 [2024-11-20 17:45:46.733944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:19.699 pt1 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.699 malloc2 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.699 [2024-11-20 17:45:46.793262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:19.699 [2024-11-20 17:45:46.793332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.699 [2024-11-20 17:45:46.793361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:19.699 [2024-11-20 17:45:46.793371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.699 [2024-11-20 17:45:46.795715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.699 [2024-11-20 17:45:46.795750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:19.699 
pt2 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.699 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.700 malloc3 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.700 [2024-11-20 17:45:46.866268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:19.700 [2024-11-20 17:45:46.866335] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.700 [2024-11-20 17:45:46.866360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:19.700 [2024-11-20 17:45:46.866371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.700 [2024-11-20 17:45:46.868841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.700 [2024-11-20 17:45:46.868878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:19.700 pt3 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.700 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.960 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.960 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:19.960 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.960 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.960 malloc4 00:11:19.960 17:45:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.960 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:19.960 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.960 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.960 [2024-11-20 17:45:46.932770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:19.960 [2024-11-20 17:45:46.932839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.960 [2024-11-20 17:45:46.932861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:19.960 [2024-11-20 17:45:46.932871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.961 [2024-11-20 17:45:46.935281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.961 [2024-11-20 17:45:46.935315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:19.961 pt4 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.961 [2024-11-20 17:45:46.944762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:19.961 [2024-11-20 
17:45:46.946884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:19.961 [2024-11-20 17:45:46.946977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:19.961 [2024-11-20 17:45:46.947041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:19.961 [2024-11-20 17:45:46.947230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:19.961 [2024-11-20 17:45:46.947248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:19.961 [2024-11-20 17:45:46.947504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:19.961 [2024-11-20 17:45:46.947686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:19.961 [2024-11-20 17:45:46.947705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:19.961 [2024-11-20 17:45:46.947866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
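The trace above shows `bdev_raid.sh@416-426` looping `i` from 1 to `num_base_bdevs`, appending to three parallel arrays (`base_bdevs_malloc`, `base_bdevs_pt`, `base_bdevs_pt_uuid`) before passing the passthru names to `bdev_raid_create`. A minimal bash sketch of that array-building pattern, with the `rpc_cmd` calls left as comments since no SPDK target is assumed to be running:

```shell
#!/usr/bin/env bash
# Sketch of the loop traced at bdev_raid.sh@416-426 (names mirror the log;
# the rpc_cmd lines are shown as comments, not executed here).
num_base_bdevs=4
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"

    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")

    # In the real test:
    #   rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    #   rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

# In the real test (-z 64: strip size, -s: write superblocks):
#   rpc_cmd bdev_raid_create -z 64 -r raid0 -b "'${base_bdevs_pt[*]}'" -n raid_bdev1 -s
echo "base bdevs: ${base_bdevs_pt[*]}"
```

The quoting visible in the log (`-b ''\''pt1 pt2 pt3 pt4'\'''`) is xtrace rendering the single-quoted, space-joined array expansion.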
raid_bdev_info 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.961 17:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.961 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.961 "name": "raid_bdev1", 00:11:19.961 "uuid": "2b64cb15-68d3-41c4-8e62-e73fce0906fe", 00:11:19.961 "strip_size_kb": 64, 00:11:19.961 "state": "online", 00:11:19.961 "raid_level": "raid0", 00:11:19.961 "superblock": true, 00:11:19.961 "num_base_bdevs": 4, 00:11:19.961 "num_base_bdevs_discovered": 4, 00:11:19.961 "num_base_bdevs_operational": 4, 00:11:19.961 "base_bdevs_list": [ 00:11:19.961 { 00:11:19.961 "name": "pt1", 00:11:19.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.961 "is_configured": true, 00:11:19.961 "data_offset": 2048, 00:11:19.961 "data_size": 63488 00:11:19.961 }, 00:11:19.961 { 00:11:19.961 "name": "pt2", 00:11:19.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.961 "is_configured": true, 00:11:19.961 "data_offset": 2048, 00:11:19.961 "data_size": 63488 00:11:19.961 }, 00:11:19.961 { 00:11:19.961 "name": "pt3", 00:11:19.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.961 "is_configured": true, 00:11:19.961 "data_offset": 2048, 00:11:19.961 
"data_size": 63488 00:11:19.961 }, 00:11:19.961 { 00:11:19.961 "name": "pt4", 00:11:19.961 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.961 "is_configured": true, 00:11:19.961 "data_offset": 2048, 00:11:19.961 "data_size": 63488 00:11:19.961 } 00:11:19.961 ] 00:11:19.961 }' 00:11:19.961 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.961 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.221 [2024-11-20 17:45:47.328475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.221 "name": "raid_bdev1", 00:11:20.221 "aliases": [ 00:11:20.221 "2b64cb15-68d3-41c4-8e62-e73fce0906fe" 
00:11:20.221 ], 00:11:20.221 "product_name": "Raid Volume", 00:11:20.221 "block_size": 512, 00:11:20.221 "num_blocks": 253952, 00:11:20.221 "uuid": "2b64cb15-68d3-41c4-8e62-e73fce0906fe", 00:11:20.221 "assigned_rate_limits": { 00:11:20.221 "rw_ios_per_sec": 0, 00:11:20.221 "rw_mbytes_per_sec": 0, 00:11:20.221 "r_mbytes_per_sec": 0, 00:11:20.221 "w_mbytes_per_sec": 0 00:11:20.221 }, 00:11:20.221 "claimed": false, 00:11:20.221 "zoned": false, 00:11:20.221 "supported_io_types": { 00:11:20.221 "read": true, 00:11:20.221 "write": true, 00:11:20.221 "unmap": true, 00:11:20.221 "flush": true, 00:11:20.221 "reset": true, 00:11:20.221 "nvme_admin": false, 00:11:20.221 "nvme_io": false, 00:11:20.221 "nvme_io_md": false, 00:11:20.221 "write_zeroes": true, 00:11:20.221 "zcopy": false, 00:11:20.221 "get_zone_info": false, 00:11:20.221 "zone_management": false, 00:11:20.221 "zone_append": false, 00:11:20.221 "compare": false, 00:11:20.221 "compare_and_write": false, 00:11:20.221 "abort": false, 00:11:20.221 "seek_hole": false, 00:11:20.221 "seek_data": false, 00:11:20.221 "copy": false, 00:11:20.221 "nvme_iov_md": false 00:11:20.221 }, 00:11:20.221 "memory_domains": [ 00:11:20.221 { 00:11:20.221 "dma_device_id": "system", 00:11:20.221 "dma_device_type": 1 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.221 "dma_device_type": 2 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "dma_device_id": "system", 00:11:20.221 "dma_device_type": 1 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.221 "dma_device_type": 2 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "dma_device_id": "system", 00:11:20.221 "dma_device_type": 1 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.221 "dma_device_type": 2 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "dma_device_id": "system", 00:11:20.221 "dma_device_type": 1 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:20.221 "dma_device_type": 2 00:11:20.221 } 00:11:20.221 ], 00:11:20.221 "driver_specific": { 00:11:20.221 "raid": { 00:11:20.221 "uuid": "2b64cb15-68d3-41c4-8e62-e73fce0906fe", 00:11:20.221 "strip_size_kb": 64, 00:11:20.221 "state": "online", 00:11:20.221 "raid_level": "raid0", 00:11:20.221 "superblock": true, 00:11:20.221 "num_base_bdevs": 4, 00:11:20.221 "num_base_bdevs_discovered": 4, 00:11:20.221 "num_base_bdevs_operational": 4, 00:11:20.221 "base_bdevs_list": [ 00:11:20.221 { 00:11:20.221 "name": "pt1", 00:11:20.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.221 "is_configured": true, 00:11:20.221 "data_offset": 2048, 00:11:20.221 "data_size": 63488 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "name": "pt2", 00:11:20.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.221 "is_configured": true, 00:11:20.221 "data_offset": 2048, 00:11:20.221 "data_size": 63488 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "name": "pt3", 00:11:20.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.221 "is_configured": true, 00:11:20.221 "data_offset": 2048, 00:11:20.221 "data_size": 63488 00:11:20.221 }, 00:11:20.221 { 00:11:20.221 "name": "pt4", 00:11:20.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.221 "is_configured": true, 00:11:20.221 "data_offset": 2048, 00:11:20.221 "data_size": 63488 00:11:20.221 } 00:11:20.221 ] 00:11:20.221 } 00:11:20.221 } 00:11:20.221 }' 00:11:20.221 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:20.481 pt2 00:11:20.481 pt3 00:11:20.481 pt4' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.481 17:45:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
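Each of the comparisons above runs the same jq filter, `.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`, against the raid bdev and every base bdev, then compares the two strings. A small sketch of that comparison logic with the jq output hardcoded (assumed values, matching the `'512   '` seen in the log):

```shell
#!/usr/bin/env bash
# Sketch of the property check at bdev_raid.sh@193. jq's join(" ") renders
# null fields as empty strings, so a bdev with block_size=512 and no
# metadata yields "512   " (512 plus three trailing spaces). Values here
# are hardcoded; the real test derives both from rpc_cmd bdev_get_bdevs.
cmp_raid_bdev='512   '
cmp_base_bdev='512   '

# Bash xtrace escapes every character of the pattern operand, which is
# why the log prints this as: [[ 512 == \5\1\2\ \ \ ]]
if [[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]]; then
    echo "base bdev properties match raid bdev"
else
    echo "property mismatch" >&2
fi
```

Quoting the right-hand side matters: unquoted, `[[ == ]]` treats it as a glob pattern, and xtrace shows the backslash-escaped form because the test script passes it pre-escaped to force a literal match.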
00:11:20.481 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:20.481 [2024-11-20 17:45:47.647824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2b64cb15-68d3-41c4-8e62-e73fce0906fe 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2b64cb15-68d3-41c4-8e62-e73fce0906fe ']' 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.741 [2024-11-20 17:45:47.691488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.741 [2024-11-20 17:45:47.691534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.741 [2024-11-20 17:45:47.691649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.741 [2024-11-20 17:45:47.691734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.741 [2024-11-20 17:45:47.691753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.741 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 [2024-11-20 17:45:47.851193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:20.742 [2024-11-20 17:45:47.853415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:20.742 [2024-11-20 17:45:47.853469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:20.742 [2024-11-20 17:45:47.853502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:20.742 [2024-11-20 17:45:47.853559] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:20.742 [2024-11-20 17:45:47.853614] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:20.742 [2024-11-20 17:45:47.853633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:20.742 [2024-11-20 17:45:47.853651] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:20.742 [2024-11-20 17:45:47.853665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.742 [2024-11-20 17:45:47.853679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:20.742 request: 00:11:20.742 { 00:11:20.742 "name": "raid_bdev1", 00:11:20.742 "raid_level": "raid0", 00:11:20.742 "base_bdevs": [ 00:11:20.742 "malloc1", 00:11:20.742 "malloc2", 00:11:20.742 "malloc3", 00:11:20.742 "malloc4" 00:11:20.742 ], 00:11:20.742 "strip_size_kb": 64, 00:11:20.742 "superblock": false, 00:11:20.742 "method": "bdev_raid_create", 00:11:20.742 "req_id": 1 00:11:20.742 } 00:11:20.742 Got JSON-RPC error response 00:11:20.742 response: 00:11:20.742 { 00:11:20.742 "code": -17, 00:11:20.742 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:20.742 } 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 [2024-11-20 17:45:47.899123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:20.742 [2024-11-20 17:45:47.899171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.742 [2024-11-20 17:45:47.899191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:20.742 [2024-11-20 17:45:47.899202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.742 [2024-11-20 17:45:47.901731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.742 [2024-11-20 17:45:47.901771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:20.742 [2024-11-20 17:45:47.901844] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:20.742 [2024-11-20 17:45:47.901904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.742 pt1 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.742 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.002 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.002 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.002 "name": "raid_bdev1", 00:11:21.003 "uuid": "2b64cb15-68d3-41c4-8e62-e73fce0906fe", 00:11:21.003 "strip_size_kb": 64, 00:11:21.003 "state": "configuring", 00:11:21.003 "raid_level": "raid0", 00:11:21.003 "superblock": true, 00:11:21.003 "num_base_bdevs": 4, 00:11:21.003 "num_base_bdevs_discovered": 1, 00:11:21.003 "num_base_bdevs_operational": 4, 00:11:21.003 "base_bdevs_list": [ 00:11:21.003 { 00:11:21.003 "name": "pt1", 00:11:21.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.003 "is_configured": true, 00:11:21.003 "data_offset": 2048, 00:11:21.003 "data_size": 63488 00:11:21.003 }, 00:11:21.003 { 00:11:21.003 "name": null, 00:11:21.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.003 "is_configured": false, 00:11:21.003 "data_offset": 2048, 00:11:21.003 "data_size": 63488 00:11:21.003 }, 00:11:21.003 { 00:11:21.003 "name": null, 00:11:21.003 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:21.003 "is_configured": false, 00:11:21.003 "data_offset": 2048, 00:11:21.003 "data_size": 63488 00:11:21.003 }, 00:11:21.003 { 00:11:21.003 "name": null, 00:11:21.003 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.003 "is_configured": false, 00:11:21.003 "data_offset": 2048, 00:11:21.003 "data_size": 63488 00:11:21.003 } 00:11:21.003 ] 00:11:21.003 }' 00:11:21.003 17:45:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.003 17:45:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 [2024-11-20 17:45:48.350329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.264 [2024-11-20 17:45:48.350385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.264 [2024-11-20 17:45:48.350402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:21.264 [2024-11-20 17:45:48.350413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.264 [2024-11-20 17:45:48.350822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.264 [2024-11-20 17:45:48.350850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.264 [2024-11-20 17:45:48.350913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:21.264 [2024-11-20 17:45:48.350935] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.264 pt2 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 [2024-11-20 17:45:48.362352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.264 17:45:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.264 "name": "raid_bdev1", 00:11:21.264 "uuid": "2b64cb15-68d3-41c4-8e62-e73fce0906fe", 00:11:21.264 "strip_size_kb": 64, 00:11:21.264 "state": "configuring", 00:11:21.264 "raid_level": "raid0", 00:11:21.264 "superblock": true, 00:11:21.264 "num_base_bdevs": 4, 00:11:21.264 "num_base_bdevs_discovered": 1, 00:11:21.264 "num_base_bdevs_operational": 4, 00:11:21.264 "base_bdevs_list": [ 00:11:21.264 { 00:11:21.264 "name": "pt1", 00:11:21.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.264 "is_configured": true, 00:11:21.264 "data_offset": 2048, 00:11:21.264 "data_size": 63488 00:11:21.264 }, 00:11:21.264 { 00:11:21.264 "name": null, 00:11:21.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.264 "is_configured": false, 00:11:21.264 "data_offset": 0, 00:11:21.264 "data_size": 63488 00:11:21.264 }, 00:11:21.264 { 00:11:21.264 "name": null, 00:11:21.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.264 "is_configured": false, 00:11:21.264 "data_offset": 2048, 00:11:21.264 "data_size": 63488 00:11:21.264 }, 00:11:21.264 { 00:11:21.264 "name": null, 00:11:21.264 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.264 "is_configured": false, 00:11:21.264 "data_offset": 2048, 00:11:21.264 "data_size": 63488 00:11:21.264 } 00:11:21.264 ] 00:11:21.264 }' 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.264 17:45:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 [2024-11-20 17:45:48.821602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.835 [2024-11-20 17:45:48.821700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.835 [2024-11-20 17:45:48.821724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:21.835 [2024-11-20 17:45:48.821735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.835 [2024-11-20 17:45:48.822279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.835 [2024-11-20 17:45:48.822310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.835 [2024-11-20 17:45:48.822415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:21.835 [2024-11-20 17:45:48.822446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.835 pt2 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 [2024-11-20 17:45:48.833516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:21.835 [2024-11-20 17:45:48.833570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.835 [2024-11-20 17:45:48.833591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:21.835 [2024-11-20 17:45:48.833601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.835 [2024-11-20 17:45:48.834039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.835 [2024-11-20 17:45:48.834063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:21.835 [2024-11-20 17:45:48.834136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:21.835 [2024-11-20 17:45:48.834163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:21.835 pt3 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 [2024-11-20 17:45:48.845465] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:21.835 [2024-11-20 17:45:48.845511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.835 [2024-11-20 17:45:48.845528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:21.835 [2024-11-20 17:45:48.845536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.835 [2024-11-20 17:45:48.845919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.835 [2024-11-20 17:45:48.845943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:21.835 [2024-11-20 17:45:48.846007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:21.835 [2024-11-20 17:45:48.846044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:21.835 [2024-11-20 17:45:48.846191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.835 [2024-11-20 17:45:48.846204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:21.835 [2024-11-20 17:45:48.846461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:21.835 [2024-11-20 17:45:48.846630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.835 [2024-11-20 17:45:48.846648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:21.835 [2024-11-20 17:45:48.846779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.835 pt4 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.835 "name": "raid_bdev1", 00:11:21.835 "uuid": "2b64cb15-68d3-41c4-8e62-e73fce0906fe", 00:11:21.835 "strip_size_kb": 64, 00:11:21.835 "state": "online", 00:11:21.835 "raid_level": "raid0", 00:11:21.835 
"superblock": true, 00:11:21.835 "num_base_bdevs": 4, 00:11:21.835 "num_base_bdevs_discovered": 4, 00:11:21.835 "num_base_bdevs_operational": 4, 00:11:21.835 "base_bdevs_list": [ 00:11:21.835 { 00:11:21.835 "name": "pt1", 00:11:21.835 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.835 "is_configured": true, 00:11:21.835 "data_offset": 2048, 00:11:21.835 "data_size": 63488 00:11:21.835 }, 00:11:21.835 { 00:11:21.835 "name": "pt2", 00:11:21.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.835 "is_configured": true, 00:11:21.835 "data_offset": 2048, 00:11:21.835 "data_size": 63488 00:11:21.835 }, 00:11:21.835 { 00:11:21.835 "name": "pt3", 00:11:21.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.835 "is_configured": true, 00:11:21.835 "data_offset": 2048, 00:11:21.835 "data_size": 63488 00:11:21.835 }, 00:11:21.835 { 00:11:21.835 "name": "pt4", 00:11:21.835 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.835 "is_configured": true, 00:11:21.835 "data_offset": 2048, 00:11:21.835 "data_size": 63488 00:11:21.835 } 00:11:21.835 ] 00:11:21.835 }' 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.835 17:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.404 17:45:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.404 [2024-11-20 17:45:49.341107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.404 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.404 "name": "raid_bdev1", 00:11:22.404 "aliases": [ 00:11:22.404 "2b64cb15-68d3-41c4-8e62-e73fce0906fe" 00:11:22.404 ], 00:11:22.404 "product_name": "Raid Volume", 00:11:22.404 "block_size": 512, 00:11:22.404 "num_blocks": 253952, 00:11:22.404 "uuid": "2b64cb15-68d3-41c4-8e62-e73fce0906fe", 00:11:22.404 "assigned_rate_limits": { 00:11:22.404 "rw_ios_per_sec": 0, 00:11:22.404 "rw_mbytes_per_sec": 0, 00:11:22.404 "r_mbytes_per_sec": 0, 00:11:22.404 "w_mbytes_per_sec": 0 00:11:22.404 }, 00:11:22.404 "claimed": false, 00:11:22.404 "zoned": false, 00:11:22.404 "supported_io_types": { 00:11:22.404 "read": true, 00:11:22.404 "write": true, 00:11:22.404 "unmap": true, 00:11:22.404 "flush": true, 00:11:22.404 "reset": true, 00:11:22.404 "nvme_admin": false, 00:11:22.404 "nvme_io": false, 00:11:22.404 "nvme_io_md": false, 00:11:22.404 "write_zeroes": true, 00:11:22.404 "zcopy": false, 00:11:22.404 "get_zone_info": false, 00:11:22.404 "zone_management": false, 00:11:22.404 "zone_append": false, 00:11:22.404 "compare": false, 00:11:22.404 "compare_and_write": false, 00:11:22.404 "abort": false, 00:11:22.404 "seek_hole": false, 00:11:22.404 "seek_data": false, 00:11:22.404 "copy": false, 00:11:22.404 "nvme_iov_md": false 00:11:22.404 }, 00:11:22.404 
"memory_domains": [ 00:11:22.404 { 00:11:22.404 "dma_device_id": "system", 00:11:22.404 "dma_device_type": 1 00:11:22.404 }, 00:11:22.404 { 00:11:22.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.404 "dma_device_type": 2 00:11:22.404 }, 00:11:22.404 { 00:11:22.404 "dma_device_id": "system", 00:11:22.404 "dma_device_type": 1 00:11:22.404 }, 00:11:22.404 { 00:11:22.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.404 "dma_device_type": 2 00:11:22.404 }, 00:11:22.404 { 00:11:22.404 "dma_device_id": "system", 00:11:22.404 "dma_device_type": 1 00:11:22.404 }, 00:11:22.404 { 00:11:22.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.404 "dma_device_type": 2 00:11:22.404 }, 00:11:22.404 { 00:11:22.404 "dma_device_id": "system", 00:11:22.404 "dma_device_type": 1 00:11:22.404 }, 00:11:22.404 { 00:11:22.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.404 "dma_device_type": 2 00:11:22.404 } 00:11:22.404 ], 00:11:22.404 "driver_specific": { 00:11:22.404 "raid": { 00:11:22.404 "uuid": "2b64cb15-68d3-41c4-8e62-e73fce0906fe", 00:11:22.405 "strip_size_kb": 64, 00:11:22.405 "state": "online", 00:11:22.405 "raid_level": "raid0", 00:11:22.405 "superblock": true, 00:11:22.405 "num_base_bdevs": 4, 00:11:22.405 "num_base_bdevs_discovered": 4, 00:11:22.405 "num_base_bdevs_operational": 4, 00:11:22.405 "base_bdevs_list": [ 00:11:22.405 { 00:11:22.405 "name": "pt1", 00:11:22.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.405 "is_configured": true, 00:11:22.405 "data_offset": 2048, 00:11:22.405 "data_size": 63488 00:11:22.405 }, 00:11:22.405 { 00:11:22.405 "name": "pt2", 00:11:22.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.405 "is_configured": true, 00:11:22.405 "data_offset": 2048, 00:11:22.405 "data_size": 63488 00:11:22.405 }, 00:11:22.405 { 00:11:22.405 "name": "pt3", 00:11:22.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.405 "is_configured": true, 00:11:22.405 "data_offset": 2048, 00:11:22.405 "data_size": 63488 
00:11:22.405 },
00:11:22.405 {
00:11:22.405 "name": "pt4",
00:11:22.405 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:22.405 "is_configured": true,
00:11:22.405 "data_offset": 2048,
00:11:22.405 "data_size": 63488
00:11:22.405 }
00:11:22.405 ]
00:11:22.405 }
00:11:22.405 }
00:11:22.405 }'
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:22.405 pt2
00:11:22.405 pt3
00:11:22.405 pt4'
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.405 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:22.664 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.664 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:22.664 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.665 [2024-11-20 17:45:49.652419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2b64cb15-68d3-41c4-8e62-e73fce0906fe '!=' 2b64cb15-68d3-41c4-8e62-e73fce0906fe ']'
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71151
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71151 ']'
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71151
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71151
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:22.665 killing process with pid 71151
17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71151'
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71151
00:11:22.665 [2024-11-20 17:45:49.709829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:22.665 [2024-11-20 17:45:49.709940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:22.665 17:45:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71151
00:11:22.665 [2024-11-20 17:45:49.710046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:22.665 [2024-11-20 17:45:49.710057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:11:23.234 [2024-11-20 17:45:50.152629] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:24.615 17:45:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:11:24.615
00:11:24.615 real 0m5.641s
00:11:24.615 user 0m7.860s
00:11:24.615 sys 0m1.018s
00:11:24.615 17:45:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.615 17:45:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.615 ************************************
00:11:24.615 END TEST raid_superblock_test
00:11:24.615 ************************************
00:11:24.615 17:45:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:11:24.615 17:45:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:24.615 17:45:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:24.615 17:45:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:24.615 ************************************
00:11:24.615 START TEST raid_read_error_test
00:11:24.615 ************************************
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9YoKe4B9dO
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71410
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71410
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71410 ']'
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:24.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:24.615 17:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.615 [2024-11-20 17:45:51.569192] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:11:24.615 [2024-11-20 17:45:51.569317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71410 ]
00:11:24.615 [2024-11-20 17:45:51.744898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:24.875 [2024-11-20 17:45:51.885755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.135 [2024-11-20 17:45:52.127572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:25.135 [2024-11-20 17:45:52.127783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.395 BaseBdev1_malloc
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.395 true
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.395 [2024-11-20 17:45:52.503728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:25.395 [2024-11-20 17:45:52.503805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.395 [2024-11-20 17:45:52.503827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:25.395 [2024-11-20 17:45:52.503838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.395 [2024-11-20 17:45:52.506301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.395 [2024-11-20 17:45:52.506344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:25.395 BaseBdev1
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.395 BaseBdev2_malloc
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:25.395 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.655 true
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.655 [2024-11-20 17:45:52.579836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:25.655 [2024-11-20 17:45:52.580029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.655 [2024-11-20 17:45:52.580055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:25.655 [2024-11-20 17:45:52.580068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.655 [2024-11-20 17:45:52.582569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.655 [2024-11-20 17:45:52.582612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:25.655 BaseBdev2
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.655 BaseBdev3_malloc
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.655 true
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.655 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.655 [2024-11-20 17:45:52.667694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:25.655 [2024-11-20 17:45:52.667764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.655 [2024-11-20 17:45:52.667786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:25.655 [2024-11-20 17:45:52.667797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.656 [2024-11-20 17:45:52.670463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.656 [2024-11-20 17:45:52.670603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:25.656 BaseBdev3
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.656 BaseBdev4_malloc
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.656 true
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.656 [2024-11-20 17:45:52.743152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:25.656 [2024-11-20 17:45:52.743218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.656 [2024-11-20 17:45:52.743235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:25.656 [2024-11-20 17:45:52.743247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.656 [2024-11-20 17:45:52.745598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.656 [2024-11-20 17:45:52.745727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:25.656 BaseBdev4
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.656 [2024-11-20 17:45:52.755218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:25.656 [2024-11-20 17:45:52.757406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:25.656 [2024-11-20 17:45:52.757487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:25.656 [2024-11-20 17:45:52.757551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:25.656 [2024-11-20 17:45:52.757798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:11:25.656 [2024-11-20 17:45:52.757815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:25.656 [2024-11-20 17:45:52.758094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0
00:11:25.656 [2024-11-20 17:45:52.758289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:11:25.656 [2024-11-20 17:45:52.758306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:11:25.656 [2024-11-20 17:45:52.758471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:25.656 "name": "raid_bdev1",
00:11:25.656 "uuid": "de290560-8135-43b1-9a26-fa92f2876544",
00:11:25.656 "strip_size_kb": 64,
00:11:25.656 "state": "online",
00:11:25.656 "raid_level": "raid0",
00:11:25.656 "superblock": true,
00:11:25.656 "num_base_bdevs": 4,
00:11:25.656 "num_base_bdevs_discovered": 4,
00:11:25.656 "num_base_bdevs_operational": 4,
00:11:25.656 "base_bdevs_list": [
00:11:25.656 {
00:11:25.656 "name": "BaseBdev1",
00:11:25.656 "uuid": "7fe78d68-5c5c-5679-a871-dfdb297663ce",
00:11:25.656 "is_configured": true,
00:11:25.656 "data_offset": 2048,
00:11:25.656 "data_size": 63488
00:11:25.656 },
00:11:25.656 {
00:11:25.656 "name": "BaseBdev2",
00:11:25.656 "uuid": "aaf9e70d-cd11-5d95-9c21-080c2cc2442a",
00:11:25.656 "is_configured": true,
00:11:25.656 "data_offset": 2048,
00:11:25.656 "data_size": 63488
00:11:25.656 },
00:11:25.656 {
00:11:25.656 "name": "BaseBdev3",
00:11:25.656 "uuid": "0e95a10e-a8de-50dc-a915-3d2e344a27c2",
00:11:25.656 "is_configured": true,
00:11:25.656 "data_offset": 2048,
00:11:25.656 "data_size": 63488
00:11:25.656 },
00:11:25.656 {
00:11:25.656 "name": "BaseBdev4",
00:11:25.656 "uuid": "830c27b2-7e3c-5f38-b365-29f94247a59c",
00:11:25.656 "is_configured": true,
00:11:25.656 "data_offset": 2048,
00:11:25.656 "data_size": 63488
00:11:25.656 }
00:11:25.656 ]
00:11:25.656 }'
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:25.656 17:45:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.239 17:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:26.239 17:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
[2024-11-20 17:45:53.231862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:27.178 "name": "raid_bdev1",
00:11:27.178 "uuid": "de290560-8135-43b1-9a26-fa92f2876544",
00:11:27.178 "strip_size_kb": 64,
00:11:27.178 "state": "online",
00:11:27.178 "raid_level": "raid0",
00:11:27.178 "superblock": true,
00:11:27.178 "num_base_bdevs": 4,
00:11:27.178 "num_base_bdevs_discovered": 4,
00:11:27.178 "num_base_bdevs_operational": 4,
00:11:27.178 "base_bdevs_list": [
00:11:27.178 {
00:11:27.178 "name": "BaseBdev1",
00:11:27.178 "uuid": "7fe78d68-5c5c-5679-a871-dfdb297663ce",
00:11:27.178 "is_configured": true,
00:11:27.178 "data_offset": 2048,
00:11:27.178 "data_size": 63488
00:11:27.178 },
00:11:27.178 {
00:11:27.178 "name": "BaseBdev2",
00:11:27.178 "uuid": "aaf9e70d-cd11-5d95-9c21-080c2cc2442a",
00:11:27.178 "is_configured": true,
00:11:27.178 "data_offset": 2048,
00:11:27.178 "data_size": 63488
00:11:27.178 },
00:11:27.178 {
00:11:27.178 "name": "BaseBdev3",
00:11:27.178 "uuid": "0e95a10e-a8de-50dc-a915-3d2e344a27c2",
00:11:27.178 "is_configured": true,
00:11:27.178 "data_offset": 2048,
00:11:27.178 "data_size": 63488
00:11:27.178 },
00:11:27.178 {
00:11:27.178 "name": "BaseBdev4",
00:11:27.178 "uuid": "830c27b2-7e3c-5f38-b365-29f94247a59c",
00:11:27.178 "is_configured": true,
00:11:27.178 "data_offset": 2048,
00:11:27.178 "data_size": 63488
00:11:27.178 }
00:11:27.178 ]
00:11:27.178 }'
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:27.178 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.438 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:27.438 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.438 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.438 [2024-11-20 17:45:54.608889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:27.438 [2024-11-20 17:45:54.608947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:27.438 [2024-11-20 17:45:54.611679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:27.438 [2024-11-20 17:45:54.611735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:27.438 [2024-11-20 17:45:54.611783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:27.438 [2024-11-20 17:45:54.611796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:11:27.698 {
00:11:27.698 "results": [
00:11:27.698 {
00:11:27.698 "job": "raid_bdev1",
00:11:27.698 "core_mask": "0x1",
00:11:27.698 "workload": "randrw",
00:11:27.698 "percentage": 50,
00:11:27.698 "status": "finished",
00:11:27.698 "queue_depth": 1,
00:11:27.698 "io_size": 131072,
00:11:27.698 "runtime": 1.377406,
00:11:27.698 "iops": 13580.60005546658,
00:11:27.698 "mibps": 1697.5750069333226,
00:11:27.698 "io_failed": 1,
00:11:27.698 "io_timeout": 0,
00:11:27.698 "avg_latency_us": 103.74746356301719,
00:11:27.698 "min_latency_us": 26.606113537117903,
00:11:27.698 "max_latency_us": 1387.989519650655
00:11:27.698 }
00:11:27.698 ],
00:11:27.698 "core_count": 1
00:11:27.698 }
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71410
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71410 ']'
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71410
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71410
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71410'
killing process with pid 71410
17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71410
00:11:27.698 [2024-11-20 17:45:54.656600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:27.698 17:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71410
00:11:27.698 [2024-11-20 17:45:55.020772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9YoKe4B9dO
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:11:29.371
00:11:29.371 real 0m4.897s
00:11:29.371 user 0m5.610s
00:11:29.371 sys 0m0.692s
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:29.371 17:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.371 ************************************
00:11:29.371 END TEST raid_read_error_test
00:11:29.371 ************************************
00:11:29.371 17:45:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write
00:11:29.371 17:45:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:29.371 17:45:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:29.371 17:45:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:29.371 ************************************
00:11:29.371 START TEST raid_write_error_test
00:11:29.371 ************************************
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:29.371 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EqDIHUOPph
00:11:29.372 17:45:56
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71561 00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71561 00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71561 ']' 00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.372 17:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.372 [2024-11-20 17:45:56.541468] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:11:29.372 [2024-11-20 17:45:56.541719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71561 ] 00:11:29.635 [2024-11-20 17:45:56.721157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.895 [2024-11-20 17:45:56.863787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.155 [2024-11-20 17:45:57.095701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.155 [2024-11-20 17:45:57.095748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.415 BaseBdev1_malloc 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.415 true 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.415 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.415 [2024-11-20 17:45:57.450809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.415 [2024-11-20 17:45:57.450970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.415 [2024-11-20 17:45:57.450995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:30.415 [2024-11-20 17:45:57.451019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.415 [2024-11-20 17:45:57.453551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.416 [2024-11-20 17:45:57.453595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.416 BaseBdev1 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.416 BaseBdev2_malloc 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:30.416 17:45:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.416 true 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.416 [2024-11-20 17:45:57.527032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:30.416 [2024-11-20 17:45:57.527105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.416 [2024-11-20 17:45:57.527125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:30.416 [2024-11-20 17:45:57.527137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.416 [2024-11-20 17:45:57.529614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.416 [2024-11-20 17:45:57.529656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.416 BaseBdev2 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.416 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:30.675 BaseBdev3_malloc 00:11:30.675 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.675 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:30.675 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 true 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 [2024-11-20 17:45:57.617915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:30.676 [2024-11-20 17:45:57.618007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.676 [2024-11-20 17:45:57.618039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:30.676 [2024-11-20 17:45:57.618052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.676 [2024-11-20 17:45:57.620603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.676 [2024-11-20 17:45:57.620647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.676 BaseBdev3 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 BaseBdev4_malloc 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 true 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 [2024-11-20 17:45:57.691360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:30.676 [2024-11-20 17:45:57.691430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.676 [2024-11-20 17:45:57.691450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.676 [2024-11-20 17:45:57.691463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.676 [2024-11-20 17:45:57.693946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.676 [2024-11-20 17:45:57.694142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.676 BaseBdev4 
00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 [2024-11-20 17:45:57.699405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.676 [2024-11-20 17:45:57.701591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.676 [2024-11-20 17:45:57.701674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.676 [2024-11-20 17:45:57.701739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.676 [2024-11-20 17:45:57.701984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:30.676 [2024-11-20 17:45:57.702000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:30.676 [2024-11-20 17:45:57.702278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:30.676 [2024-11-20 17:45:57.702453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:30.676 [2024-11-20 17:45:57.702465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:30.676 [2024-11-20 17:45:57.702615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.676 "name": "raid_bdev1", 00:11:30.676 "uuid": "5ca22c4f-2bc8-4e2d-8df1-813d84103519", 00:11:30.676 "strip_size_kb": 64, 00:11:30.676 "state": "online", 00:11:30.676 "raid_level": "raid0", 00:11:30.676 "superblock": true, 00:11:30.676 "num_base_bdevs": 4, 00:11:30.676 "num_base_bdevs_discovered": 4, 00:11:30.676 
"num_base_bdevs_operational": 4, 00:11:30.676 "base_bdevs_list": [ 00:11:30.676 { 00:11:30.676 "name": "BaseBdev1", 00:11:30.676 "uuid": "f4956f2f-df34-5561-a61a-5df50d8aa4df", 00:11:30.676 "is_configured": true, 00:11:30.676 "data_offset": 2048, 00:11:30.676 "data_size": 63488 00:11:30.676 }, 00:11:30.676 { 00:11:30.676 "name": "BaseBdev2", 00:11:30.676 "uuid": "67658a20-6103-5b7c-844f-8c3fae93a727", 00:11:30.676 "is_configured": true, 00:11:30.676 "data_offset": 2048, 00:11:30.676 "data_size": 63488 00:11:30.676 }, 00:11:30.676 { 00:11:30.676 "name": "BaseBdev3", 00:11:30.676 "uuid": "17ed36aa-492f-5eea-87dd-002b415f8b6c", 00:11:30.676 "is_configured": true, 00:11:30.676 "data_offset": 2048, 00:11:30.676 "data_size": 63488 00:11:30.676 }, 00:11:30.676 { 00:11:30.676 "name": "BaseBdev4", 00:11:30.676 "uuid": "b3c01070-0c06-53c6-af05-b95019e38b39", 00:11:30.676 "is_configured": true, 00:11:30.676 "data_offset": 2048, 00:11:30.676 "data_size": 63488 00:11:30.676 } 00:11:30.676 ] 00:11:30.676 }' 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.676 17:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.245 17:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:31.245 17:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:31.245 [2024-11-20 17:45:58.264018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:32.183 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:32.183 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.184 "name": "raid_bdev1", 00:11:32.184 "uuid": "5ca22c4f-2bc8-4e2d-8df1-813d84103519", 00:11:32.184 "strip_size_kb": 64, 00:11:32.184 "state": "online", 00:11:32.184 "raid_level": "raid0", 00:11:32.184 "superblock": true, 00:11:32.184 "num_base_bdevs": 4, 00:11:32.184 "num_base_bdevs_discovered": 4, 00:11:32.184 "num_base_bdevs_operational": 4, 00:11:32.184 "base_bdevs_list": [ 00:11:32.184 { 00:11:32.184 "name": "BaseBdev1", 00:11:32.184 "uuid": "f4956f2f-df34-5561-a61a-5df50d8aa4df", 00:11:32.184 "is_configured": true, 00:11:32.184 "data_offset": 2048, 00:11:32.184 "data_size": 63488 00:11:32.184 }, 00:11:32.184 { 00:11:32.184 "name": "BaseBdev2", 00:11:32.184 "uuid": "67658a20-6103-5b7c-844f-8c3fae93a727", 00:11:32.184 "is_configured": true, 00:11:32.184 "data_offset": 2048, 00:11:32.184 "data_size": 63488 00:11:32.184 }, 00:11:32.184 { 00:11:32.184 "name": "BaseBdev3", 00:11:32.184 "uuid": "17ed36aa-492f-5eea-87dd-002b415f8b6c", 00:11:32.184 "is_configured": true, 00:11:32.184 "data_offset": 2048, 00:11:32.184 "data_size": 63488 00:11:32.184 }, 00:11:32.184 { 00:11:32.184 "name": "BaseBdev4", 00:11:32.184 "uuid": "b3c01070-0c06-53c6-af05-b95019e38b39", 00:11:32.184 "is_configured": true, 00:11:32.184 "data_offset": 2048, 00:11:32.184 "data_size": 63488 00:11:32.184 } 00:11:32.184 ] 00:11:32.184 }' 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.184 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:32.752 [2024-11-20 17:45:59.633031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.752 [2024-11-20 17:45:59.633197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.752 [2024-11-20 17:45:59.636064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.752 [2024-11-20 17:45:59.636168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.752 [2024-11-20 17:45:59.636253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.752 [2024-11-20 17:45:59.636346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:32.752 { 00:11:32.752 "results": [ 00:11:32.752 { 00:11:32.752 "job": "raid_bdev1", 00:11:32.752 "core_mask": "0x1", 00:11:32.752 "workload": "randrw", 00:11:32.752 "percentage": 50, 00:11:32.752 "status": "finished", 00:11:32.752 "queue_depth": 1, 00:11:32.752 "io_size": 131072, 00:11:32.752 "runtime": 1.369395, 00:11:32.752 "iops": 13121.122831615421, 00:11:32.752 "mibps": 1640.1403539519276, 00:11:32.752 "io_failed": 1, 00:11:32.752 "io_timeout": 0, 00:11:32.752 "avg_latency_us": 107.33433985410583, 00:11:32.752 "min_latency_us": 26.829694323144103, 00:11:32.752 "max_latency_us": 1452.380786026201 00:11:32.752 } 00:11:32.752 ], 00:11:32.752 "core_count": 1 00:11:32.752 } 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71561 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71561 ']' 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71561 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71561 00:11:32.752 killing process with pid 71561 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71561' 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71561 00:11:32.752 [2024-11-20 17:45:59.683715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.752 17:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71561 00:11:33.012 [2024-11-20 17:46:00.046948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EqDIHUOPph 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:34.393 ************************************ 00:11:34.393 END TEST raid_write_error_test 00:11:34.393 ************************************ 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:34.393 00:11:34.393 real 0m4.969s 00:11:34.393 user 0m5.746s 00:11:34.393 sys 0m0.713s 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.393 17:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.393 17:46:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:34.393 17:46:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:34.393 17:46:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:34.393 17:46:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.393 17:46:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.393 ************************************ 00:11:34.393 START TEST raid_state_function_test 00:11:34.393 ************************************ 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:34.393 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.393 17:46:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:34.394 17:46:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:34.394 Process raid pid: 71707 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71707 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71707' 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71707 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71707 ']' 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.394 17:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.394 [2024-11-20 17:46:01.565770] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:11:34.653 [2024-11-20 17:46:01.566330] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.653 [2024-11-20 17:46:01.741224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.912 [2024-11-20 17:46:01.885566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.173 [2024-11-20 17:46:02.135379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.173 [2024-11-20 17:46:02.135442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.433 [2024-11-20 17:46:02.414919] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.433 [2024-11-20 17:46:02.415125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.433 [2024-11-20 17:46:02.415144] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.433 [2024-11-20 17:46:02.415156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.433 [2024-11-20 17:46:02.415163] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:35.433 [2024-11-20 17:46:02.415174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.433 [2024-11-20 17:46:02.415180] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.433 [2024-11-20 17:46:02.415189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.433 "name": "Existed_Raid", 00:11:35.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.433 "strip_size_kb": 64, 00:11:35.433 "state": "configuring", 00:11:35.433 "raid_level": "concat", 00:11:35.433 "superblock": false, 00:11:35.433 "num_base_bdevs": 4, 00:11:35.433 "num_base_bdevs_discovered": 0, 00:11:35.433 "num_base_bdevs_operational": 4, 00:11:35.433 "base_bdevs_list": [ 00:11:35.433 { 00:11:35.433 "name": "BaseBdev1", 00:11:35.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.433 "is_configured": false, 00:11:35.433 "data_offset": 0, 00:11:35.433 "data_size": 0 00:11:35.433 }, 00:11:35.433 { 00:11:35.433 "name": "BaseBdev2", 00:11:35.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.433 "is_configured": false, 00:11:35.433 "data_offset": 0, 00:11:35.433 "data_size": 0 00:11:35.433 }, 00:11:35.433 { 00:11:35.433 "name": "BaseBdev3", 00:11:35.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.433 "is_configured": false, 00:11:35.433 "data_offset": 0, 00:11:35.433 "data_size": 0 00:11:35.433 }, 00:11:35.433 { 00:11:35.433 "name": "BaseBdev4", 00:11:35.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.433 "is_configured": false, 00:11:35.433 "data_offset": 0, 00:11:35.433 "data_size": 0 00:11:35.433 } 00:11:35.433 ] 00:11:35.433 }' 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.433 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.001 [2024-11-20 17:46:02.874111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.001 [2024-11-20 17:46:02.874265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.001 [2024-11-20 17:46:02.886091] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.001 [2024-11-20 17:46:02.886205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.001 [2024-11-20 17:46:02.886238] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.001 [2024-11-20 17:46:02.886268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.001 [2024-11-20 17:46:02.886312] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.001 [2024-11-20 17:46:02.886337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.001 [2024-11-20 17:46:02.886411] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:36.001 [2024-11-20 17:46:02.886437] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.001 [2024-11-20 17:46:02.941690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.001 BaseBdev1 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.001 [ 00:11:36.001 { 00:11:36.001 "name": "BaseBdev1", 00:11:36.001 "aliases": [ 00:11:36.001 "a57f453d-f5e2-4fa7-ac4d-f7abf02f6645" 00:11:36.001 ], 00:11:36.001 "product_name": "Malloc disk", 00:11:36.001 "block_size": 512, 00:11:36.001 "num_blocks": 65536, 00:11:36.001 "uuid": "a57f453d-f5e2-4fa7-ac4d-f7abf02f6645", 00:11:36.001 "assigned_rate_limits": { 00:11:36.001 "rw_ios_per_sec": 0, 00:11:36.001 "rw_mbytes_per_sec": 0, 00:11:36.001 "r_mbytes_per_sec": 0, 00:11:36.001 "w_mbytes_per_sec": 0 00:11:36.001 }, 00:11:36.001 "claimed": true, 00:11:36.001 "claim_type": "exclusive_write", 00:11:36.001 "zoned": false, 00:11:36.001 "supported_io_types": { 00:11:36.001 "read": true, 00:11:36.001 "write": true, 00:11:36.001 "unmap": true, 00:11:36.001 "flush": true, 00:11:36.001 "reset": true, 00:11:36.001 "nvme_admin": false, 00:11:36.001 "nvme_io": false, 00:11:36.001 "nvme_io_md": false, 00:11:36.001 "write_zeroes": true, 00:11:36.001 "zcopy": true, 00:11:36.001 "get_zone_info": false, 00:11:36.001 "zone_management": false, 00:11:36.001 "zone_append": false, 00:11:36.001 "compare": false, 00:11:36.001 "compare_and_write": false, 00:11:36.001 "abort": true, 00:11:36.001 "seek_hole": false, 00:11:36.001 "seek_data": false, 00:11:36.001 "copy": true, 00:11:36.001 "nvme_iov_md": false 00:11:36.001 }, 00:11:36.001 "memory_domains": [ 00:11:36.001 { 00:11:36.001 "dma_device_id": "system", 00:11:36.001 "dma_device_type": 1 00:11:36.001 }, 00:11:36.001 { 00:11:36.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.001 "dma_device_type": 2 00:11:36.001 } 00:11:36.001 ], 00:11:36.001 "driver_specific": {} 00:11:36.001 } 00:11:36.001 ] 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.001 17:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.001 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.001 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.001 "name": "Existed_Raid", 
00:11:36.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.001 "strip_size_kb": 64, 00:11:36.001 "state": "configuring", 00:11:36.002 "raid_level": "concat", 00:11:36.002 "superblock": false, 00:11:36.002 "num_base_bdevs": 4, 00:11:36.002 "num_base_bdevs_discovered": 1, 00:11:36.002 "num_base_bdevs_operational": 4, 00:11:36.002 "base_bdevs_list": [ 00:11:36.002 { 00:11:36.002 "name": "BaseBdev1", 00:11:36.002 "uuid": "a57f453d-f5e2-4fa7-ac4d-f7abf02f6645", 00:11:36.002 "is_configured": true, 00:11:36.002 "data_offset": 0, 00:11:36.002 "data_size": 65536 00:11:36.002 }, 00:11:36.002 { 00:11:36.002 "name": "BaseBdev2", 00:11:36.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.002 "is_configured": false, 00:11:36.002 "data_offset": 0, 00:11:36.002 "data_size": 0 00:11:36.002 }, 00:11:36.002 { 00:11:36.002 "name": "BaseBdev3", 00:11:36.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.002 "is_configured": false, 00:11:36.002 "data_offset": 0, 00:11:36.002 "data_size": 0 00:11:36.002 }, 00:11:36.002 { 00:11:36.002 "name": "BaseBdev4", 00:11:36.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.002 "is_configured": false, 00:11:36.002 "data_offset": 0, 00:11:36.002 "data_size": 0 00:11:36.002 } 00:11:36.002 ] 00:11:36.002 }' 00:11:36.002 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.002 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.576 [2024-11-20 17:46:03.464959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.576 [2024-11-20 17:46:03.465154] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.576 [2024-11-20 17:46:03.476981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.576 [2024-11-20 17:46:03.479247] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.576 [2024-11-20 17:46:03.479349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.576 [2024-11-20 17:46:03.479365] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.576 [2024-11-20 17:46:03.479377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.576 [2024-11-20 17:46:03.479384] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:36.576 [2024-11-20 17:46:03.479393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.576 "name": "Existed_Raid", 00:11:36.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.576 "strip_size_kb": 64, 00:11:36.576 "state": "configuring", 00:11:36.576 "raid_level": "concat", 00:11:36.576 "superblock": false, 00:11:36.576 "num_base_bdevs": 4, 00:11:36.576 
"num_base_bdevs_discovered": 1, 00:11:36.576 "num_base_bdevs_operational": 4, 00:11:36.576 "base_bdevs_list": [ 00:11:36.576 { 00:11:36.576 "name": "BaseBdev1", 00:11:36.576 "uuid": "a57f453d-f5e2-4fa7-ac4d-f7abf02f6645", 00:11:36.576 "is_configured": true, 00:11:36.576 "data_offset": 0, 00:11:36.576 "data_size": 65536 00:11:36.576 }, 00:11:36.576 { 00:11:36.576 "name": "BaseBdev2", 00:11:36.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.576 "is_configured": false, 00:11:36.576 "data_offset": 0, 00:11:36.576 "data_size": 0 00:11:36.576 }, 00:11:36.576 { 00:11:36.576 "name": "BaseBdev3", 00:11:36.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.576 "is_configured": false, 00:11:36.576 "data_offset": 0, 00:11:36.576 "data_size": 0 00:11:36.576 }, 00:11:36.576 { 00:11:36.576 "name": "BaseBdev4", 00:11:36.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.576 "is_configured": false, 00:11:36.576 "data_offset": 0, 00:11:36.576 "data_size": 0 00:11:36.576 } 00:11:36.576 ] 00:11:36.576 }' 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.576 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 [2024-11-20 17:46:03.988876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.844 BaseBdev2 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:36.844 17:46:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.844 17:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.844 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.844 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.844 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 [ 00:11:37.104 { 00:11:37.104 "name": "BaseBdev2", 00:11:37.104 "aliases": [ 00:11:37.104 "e3c6084f-51b6-448c-89d9-b66a08a57da3" 00:11:37.104 ], 00:11:37.104 "product_name": "Malloc disk", 00:11:37.104 "block_size": 512, 00:11:37.104 "num_blocks": 65536, 00:11:37.104 "uuid": "e3c6084f-51b6-448c-89d9-b66a08a57da3", 00:11:37.104 "assigned_rate_limits": { 00:11:37.104 "rw_ios_per_sec": 0, 00:11:37.104 "rw_mbytes_per_sec": 0, 00:11:37.104 "r_mbytes_per_sec": 0, 00:11:37.104 "w_mbytes_per_sec": 0 00:11:37.104 }, 00:11:37.104 "claimed": true, 00:11:37.104 "claim_type": "exclusive_write", 00:11:37.104 "zoned": false, 00:11:37.104 "supported_io_types": { 
00:11:37.104 "read": true, 00:11:37.104 "write": true, 00:11:37.104 "unmap": true, 00:11:37.104 "flush": true, 00:11:37.104 "reset": true, 00:11:37.104 "nvme_admin": false, 00:11:37.104 "nvme_io": false, 00:11:37.104 "nvme_io_md": false, 00:11:37.104 "write_zeroes": true, 00:11:37.104 "zcopy": true, 00:11:37.104 "get_zone_info": false, 00:11:37.104 "zone_management": false, 00:11:37.104 "zone_append": false, 00:11:37.104 "compare": false, 00:11:37.104 "compare_and_write": false, 00:11:37.104 "abort": true, 00:11:37.104 "seek_hole": false, 00:11:37.104 "seek_data": false, 00:11:37.104 "copy": true, 00:11:37.104 "nvme_iov_md": false 00:11:37.104 }, 00:11:37.104 "memory_domains": [ 00:11:37.104 { 00:11:37.104 "dma_device_id": "system", 00:11:37.104 "dma_device_type": 1 00:11:37.104 }, 00:11:37.104 { 00:11:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.104 "dma_device_type": 2 00:11:37.104 } 00:11:37.104 ], 00:11:37.104 "driver_specific": {} 00:11:37.104 } 00:11:37.104 ] 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.104 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.105 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.105 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.105 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.105 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.105 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.105 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.105 "name": "Existed_Raid", 00:11:37.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.105 "strip_size_kb": 64, 00:11:37.105 "state": "configuring", 00:11:37.105 "raid_level": "concat", 00:11:37.105 "superblock": false, 00:11:37.105 "num_base_bdevs": 4, 00:11:37.105 "num_base_bdevs_discovered": 2, 00:11:37.105 "num_base_bdevs_operational": 4, 00:11:37.105 "base_bdevs_list": [ 00:11:37.105 { 00:11:37.105 "name": "BaseBdev1", 00:11:37.105 "uuid": "a57f453d-f5e2-4fa7-ac4d-f7abf02f6645", 00:11:37.105 "is_configured": true, 00:11:37.105 "data_offset": 0, 00:11:37.105 "data_size": 65536 00:11:37.105 }, 00:11:37.105 { 00:11:37.105 "name": "BaseBdev2", 00:11:37.105 "uuid": "e3c6084f-51b6-448c-89d9-b66a08a57da3", 00:11:37.105 
"is_configured": true, 00:11:37.105 "data_offset": 0, 00:11:37.105 "data_size": 65536 00:11:37.105 }, 00:11:37.105 { 00:11:37.105 "name": "BaseBdev3", 00:11:37.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.105 "is_configured": false, 00:11:37.105 "data_offset": 0, 00:11:37.105 "data_size": 0 00:11:37.105 }, 00:11:37.105 { 00:11:37.105 "name": "BaseBdev4", 00:11:37.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.105 "is_configured": false, 00:11:37.105 "data_offset": 0, 00:11:37.105 "data_size": 0 00:11:37.105 } 00:11:37.105 ] 00:11:37.105 }' 00:11:37.105 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.105 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.364 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.364 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 [2024-11-20 17:46:04.472439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.364 BaseBdev3 00:11:37.364 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.364 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:37.364 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.364 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.364 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.365 [ 00:11:37.365 { 00:11:37.365 "name": "BaseBdev3", 00:11:37.365 "aliases": [ 00:11:37.365 "f441cb36-2946-48d7-a1e6-9e4493edbae0" 00:11:37.365 ], 00:11:37.365 "product_name": "Malloc disk", 00:11:37.365 "block_size": 512, 00:11:37.365 "num_blocks": 65536, 00:11:37.365 "uuid": "f441cb36-2946-48d7-a1e6-9e4493edbae0", 00:11:37.365 "assigned_rate_limits": { 00:11:37.365 "rw_ios_per_sec": 0, 00:11:37.365 "rw_mbytes_per_sec": 0, 00:11:37.365 "r_mbytes_per_sec": 0, 00:11:37.365 "w_mbytes_per_sec": 0 00:11:37.365 }, 00:11:37.365 "claimed": true, 00:11:37.365 "claim_type": "exclusive_write", 00:11:37.365 "zoned": false, 00:11:37.365 "supported_io_types": { 00:11:37.365 "read": true, 00:11:37.365 "write": true, 00:11:37.365 "unmap": true, 00:11:37.365 "flush": true, 00:11:37.365 "reset": true, 00:11:37.365 "nvme_admin": false, 00:11:37.365 "nvme_io": false, 00:11:37.365 "nvme_io_md": false, 00:11:37.365 "write_zeroes": true, 00:11:37.365 "zcopy": true, 00:11:37.365 "get_zone_info": false, 00:11:37.365 "zone_management": false, 00:11:37.365 "zone_append": false, 00:11:37.365 "compare": false, 00:11:37.365 "compare_and_write": false, 
00:11:37.365 "abort": true, 00:11:37.365 "seek_hole": false, 00:11:37.365 "seek_data": false, 00:11:37.365 "copy": true, 00:11:37.365 "nvme_iov_md": false 00:11:37.365 }, 00:11:37.365 "memory_domains": [ 00:11:37.365 { 00:11:37.365 "dma_device_id": "system", 00:11:37.365 "dma_device_type": 1 00:11:37.365 }, 00:11:37.365 { 00:11:37.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.365 "dma_device_type": 2 00:11:37.365 } 00:11:37.365 ], 00:11:37.365 "driver_specific": {} 00:11:37.365 } 00:11:37.365 ] 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.365 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.625 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.625 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.625 "name": "Existed_Raid", 00:11:37.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.625 "strip_size_kb": 64, 00:11:37.625 "state": "configuring", 00:11:37.625 "raid_level": "concat", 00:11:37.625 "superblock": false, 00:11:37.625 "num_base_bdevs": 4, 00:11:37.625 "num_base_bdevs_discovered": 3, 00:11:37.625 "num_base_bdevs_operational": 4, 00:11:37.625 "base_bdevs_list": [ 00:11:37.625 { 00:11:37.625 "name": "BaseBdev1", 00:11:37.625 "uuid": "a57f453d-f5e2-4fa7-ac4d-f7abf02f6645", 00:11:37.625 "is_configured": true, 00:11:37.625 "data_offset": 0, 00:11:37.625 "data_size": 65536 00:11:37.625 }, 00:11:37.625 { 00:11:37.625 "name": "BaseBdev2", 00:11:37.625 "uuid": "e3c6084f-51b6-448c-89d9-b66a08a57da3", 00:11:37.625 "is_configured": true, 00:11:37.625 "data_offset": 0, 00:11:37.625 "data_size": 65536 00:11:37.625 }, 00:11:37.625 { 00:11:37.625 "name": "BaseBdev3", 00:11:37.625 "uuid": "f441cb36-2946-48d7-a1e6-9e4493edbae0", 00:11:37.625 "is_configured": true, 00:11:37.625 "data_offset": 0, 00:11:37.625 "data_size": 65536 00:11:37.625 }, 00:11:37.625 { 00:11:37.625 "name": "BaseBdev4", 00:11:37.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.626 "is_configured": false, 
00:11:37.626 "data_offset": 0, 00:11:37.626 "data_size": 0 00:11:37.626 } 00:11:37.626 ] 00:11:37.626 }' 00:11:37.626 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.626 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.885 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:37.885 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.885 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.885 [2024-11-20 17:46:04.974172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:37.885 [2024-11-20 17:46:04.974238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:37.885 [2024-11-20 17:46:04.974248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:37.885 [2024-11-20 17:46:04.974564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:37.885 [2024-11-20 17:46:04.974759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:37.885 [2024-11-20 17:46:04.974782] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:37.885 [2024-11-20 17:46:04.975102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.885 BaseBdev4 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.886 17:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.886 [ 00:11:37.886 { 00:11:37.886 "name": "BaseBdev4", 00:11:37.886 "aliases": [ 00:11:37.886 "c8cfb37d-37ed-4ccc-8fd7-bbc7f6f56e28" 00:11:37.886 ], 00:11:37.886 "product_name": "Malloc disk", 00:11:37.886 "block_size": 512, 00:11:37.886 "num_blocks": 65536, 00:11:37.886 "uuid": "c8cfb37d-37ed-4ccc-8fd7-bbc7f6f56e28", 00:11:37.886 "assigned_rate_limits": { 00:11:37.886 "rw_ios_per_sec": 0, 00:11:37.886 "rw_mbytes_per_sec": 0, 00:11:37.886 "r_mbytes_per_sec": 0, 00:11:37.886 "w_mbytes_per_sec": 0 00:11:37.886 }, 00:11:37.886 "claimed": true, 00:11:37.886 "claim_type": "exclusive_write", 00:11:37.886 "zoned": false, 00:11:37.886 "supported_io_types": { 00:11:37.886 "read": true, 00:11:37.886 "write": true, 00:11:37.886 "unmap": true, 00:11:37.886 "flush": true, 00:11:37.886 "reset": true, 00:11:37.886 
"nvme_admin": false, 00:11:37.886 "nvme_io": false, 00:11:37.886 "nvme_io_md": false, 00:11:37.886 "write_zeroes": true, 00:11:37.886 "zcopy": true, 00:11:37.886 "get_zone_info": false, 00:11:37.886 "zone_management": false, 00:11:37.886 "zone_append": false, 00:11:37.886 "compare": false, 00:11:37.886 "compare_and_write": false, 00:11:37.886 "abort": true, 00:11:37.886 "seek_hole": false, 00:11:37.886 "seek_data": false, 00:11:37.886 "copy": true, 00:11:37.886 "nvme_iov_md": false 00:11:37.886 }, 00:11:37.886 "memory_domains": [ 00:11:37.886 { 00:11:37.886 "dma_device_id": "system", 00:11:37.886 "dma_device_type": 1 00:11:37.886 }, 00:11:37.886 { 00:11:37.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.886 "dma_device_type": 2 00:11:37.886 } 00:11:37.886 ], 00:11:37.886 "driver_specific": {} 00:11:37.886 } 00:11:37.886 ] 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.886 
17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.886 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.146 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.146 "name": "Existed_Raid", 00:11:38.146 "uuid": "5e6821d8-d5de-4aa4-836e-3425f3fffe21", 00:11:38.146 "strip_size_kb": 64, 00:11:38.146 "state": "online", 00:11:38.146 "raid_level": "concat", 00:11:38.146 "superblock": false, 00:11:38.146 "num_base_bdevs": 4, 00:11:38.146 "num_base_bdevs_discovered": 4, 00:11:38.146 "num_base_bdevs_operational": 4, 00:11:38.146 "base_bdevs_list": [ 00:11:38.146 { 00:11:38.146 "name": "BaseBdev1", 00:11:38.146 "uuid": "a57f453d-f5e2-4fa7-ac4d-f7abf02f6645", 00:11:38.146 "is_configured": true, 00:11:38.146 "data_offset": 0, 00:11:38.146 "data_size": 65536 00:11:38.146 }, 00:11:38.146 { 00:11:38.146 "name": "BaseBdev2", 00:11:38.146 "uuid": "e3c6084f-51b6-448c-89d9-b66a08a57da3", 00:11:38.146 "is_configured": true, 00:11:38.146 "data_offset": 0, 00:11:38.146 "data_size": 65536 00:11:38.146 }, 00:11:38.146 { 00:11:38.146 "name": "BaseBdev3", 
00:11:38.146 "uuid": "f441cb36-2946-48d7-a1e6-9e4493edbae0", 00:11:38.146 "is_configured": true, 00:11:38.146 "data_offset": 0, 00:11:38.146 "data_size": 65536 00:11:38.146 }, 00:11:38.146 { 00:11:38.146 "name": "BaseBdev4", 00:11:38.146 "uuid": "c8cfb37d-37ed-4ccc-8fd7-bbc7f6f56e28", 00:11:38.146 "is_configured": true, 00:11:38.146 "data_offset": 0, 00:11:38.146 "data_size": 65536 00:11:38.146 } 00:11:38.146 ] 00:11:38.146 }' 00:11:38.146 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.146 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 [2024-11-20 17:46:05.477802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.406 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.406 
17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.406 "name": "Existed_Raid", 00:11:38.406 "aliases": [ 00:11:38.406 "5e6821d8-d5de-4aa4-836e-3425f3fffe21" 00:11:38.406 ], 00:11:38.406 "product_name": "Raid Volume", 00:11:38.406 "block_size": 512, 00:11:38.406 "num_blocks": 262144, 00:11:38.406 "uuid": "5e6821d8-d5de-4aa4-836e-3425f3fffe21", 00:11:38.406 "assigned_rate_limits": { 00:11:38.406 "rw_ios_per_sec": 0, 00:11:38.406 "rw_mbytes_per_sec": 0, 00:11:38.406 "r_mbytes_per_sec": 0, 00:11:38.406 "w_mbytes_per_sec": 0 00:11:38.406 }, 00:11:38.406 "claimed": false, 00:11:38.406 "zoned": false, 00:11:38.406 "supported_io_types": { 00:11:38.406 "read": true, 00:11:38.406 "write": true, 00:11:38.406 "unmap": true, 00:11:38.406 "flush": true, 00:11:38.406 "reset": true, 00:11:38.406 "nvme_admin": false, 00:11:38.406 "nvme_io": false, 00:11:38.406 "nvme_io_md": false, 00:11:38.406 "write_zeroes": true, 00:11:38.406 "zcopy": false, 00:11:38.406 "get_zone_info": false, 00:11:38.406 "zone_management": false, 00:11:38.406 "zone_append": false, 00:11:38.406 "compare": false, 00:11:38.406 "compare_and_write": false, 00:11:38.406 "abort": false, 00:11:38.406 "seek_hole": false, 00:11:38.406 "seek_data": false, 00:11:38.406 "copy": false, 00:11:38.406 "nvme_iov_md": false 00:11:38.406 }, 00:11:38.406 "memory_domains": [ 00:11:38.406 { 00:11:38.406 "dma_device_id": "system", 00:11:38.406 "dma_device_type": 1 00:11:38.406 }, 00:11:38.406 { 00:11:38.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.406 "dma_device_type": 2 00:11:38.406 }, 00:11:38.406 { 00:11:38.406 "dma_device_id": "system", 00:11:38.406 "dma_device_type": 1 00:11:38.406 }, 00:11:38.406 { 00:11:38.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.406 "dma_device_type": 2 00:11:38.406 }, 00:11:38.406 { 00:11:38.406 "dma_device_id": "system", 00:11:38.407 "dma_device_type": 1 00:11:38.407 }, 00:11:38.407 { 00:11:38.407 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:38.407 "dma_device_type": 2 00:11:38.407 }, 00:11:38.407 { 00:11:38.407 "dma_device_id": "system", 00:11:38.407 "dma_device_type": 1 00:11:38.407 }, 00:11:38.407 { 00:11:38.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.407 "dma_device_type": 2 00:11:38.407 } 00:11:38.407 ], 00:11:38.407 "driver_specific": { 00:11:38.407 "raid": { 00:11:38.407 "uuid": "5e6821d8-d5de-4aa4-836e-3425f3fffe21", 00:11:38.407 "strip_size_kb": 64, 00:11:38.407 "state": "online", 00:11:38.407 "raid_level": "concat", 00:11:38.407 "superblock": false, 00:11:38.407 "num_base_bdevs": 4, 00:11:38.407 "num_base_bdevs_discovered": 4, 00:11:38.407 "num_base_bdevs_operational": 4, 00:11:38.407 "base_bdevs_list": [ 00:11:38.407 { 00:11:38.407 "name": "BaseBdev1", 00:11:38.407 "uuid": "a57f453d-f5e2-4fa7-ac4d-f7abf02f6645", 00:11:38.407 "is_configured": true, 00:11:38.407 "data_offset": 0, 00:11:38.407 "data_size": 65536 00:11:38.407 }, 00:11:38.407 { 00:11:38.407 "name": "BaseBdev2", 00:11:38.407 "uuid": "e3c6084f-51b6-448c-89d9-b66a08a57da3", 00:11:38.407 "is_configured": true, 00:11:38.407 "data_offset": 0, 00:11:38.407 "data_size": 65536 00:11:38.407 }, 00:11:38.407 { 00:11:38.407 "name": "BaseBdev3", 00:11:38.407 "uuid": "f441cb36-2946-48d7-a1e6-9e4493edbae0", 00:11:38.407 "is_configured": true, 00:11:38.407 "data_offset": 0, 00:11:38.407 "data_size": 65536 00:11:38.407 }, 00:11:38.407 { 00:11:38.407 "name": "BaseBdev4", 00:11:38.407 "uuid": "c8cfb37d-37ed-4ccc-8fd7-bbc7f6f56e28", 00:11:38.407 "is_configured": true, 00:11:38.407 "data_offset": 0, 00:11:38.407 "data_size": 65536 00:11:38.407 } 00:11:38.407 ] 00:11:38.407 } 00:11:38.407 } 00:11:38.407 }' 00:11:38.407 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.407 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:38.407 BaseBdev2 
00:11:38.407 BaseBdev3 00:11:38.407 BaseBdev4' 00:11:38.407 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.667 17:46:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.667 17:46:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.667 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.667 [2024-11-20 17:46:05.820945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.667 [2024-11-20 17:46:05.821083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.667 [2024-11-20 17:46:05.821185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.927 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.927 "name": "Existed_Raid", 00:11:38.927 "uuid": "5e6821d8-d5de-4aa4-836e-3425f3fffe21", 00:11:38.927 "strip_size_kb": 64, 00:11:38.927 "state": "offline", 00:11:38.927 "raid_level": "concat", 00:11:38.927 "superblock": false, 00:11:38.927 "num_base_bdevs": 4, 00:11:38.927 "num_base_bdevs_discovered": 3, 00:11:38.927 "num_base_bdevs_operational": 3, 00:11:38.927 "base_bdevs_list": [ 00:11:38.927 { 00:11:38.927 "name": null, 00:11:38.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.927 "is_configured": false, 00:11:38.927 "data_offset": 0, 00:11:38.927 "data_size": 65536 00:11:38.927 }, 00:11:38.927 { 00:11:38.927 "name": "BaseBdev2", 00:11:38.927 "uuid": "e3c6084f-51b6-448c-89d9-b66a08a57da3", 00:11:38.927 "is_configured": 
true, 00:11:38.927 "data_offset": 0, 00:11:38.927 "data_size": 65536 00:11:38.927 }, 00:11:38.927 { 00:11:38.928 "name": "BaseBdev3", 00:11:38.928 "uuid": "f441cb36-2946-48d7-a1e6-9e4493edbae0", 00:11:38.928 "is_configured": true, 00:11:38.928 "data_offset": 0, 00:11:38.928 "data_size": 65536 00:11:38.928 }, 00:11:38.928 { 00:11:38.928 "name": "BaseBdev4", 00:11:38.928 "uuid": "c8cfb37d-37ed-4ccc-8fd7-bbc7f6f56e28", 00:11:38.928 "is_configured": true, 00:11:38.928 "data_offset": 0, 00:11:38.928 "data_size": 65536 00:11:38.928 } 00:11:38.928 ] 00:11:38.928 }' 00:11:38.928 17:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.928 17:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.496 [2024-11-20 17:46:06.477133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.496 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.496 [2024-11-20 17:46:06.638788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.756 17:46:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.756 [2024-11-20 17:46:06.802029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:39.756 [2024-11-20 17:46:06.802198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.756 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.757 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:39.757 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.757 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.757 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.017 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:40.017 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:40.017 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:40.017 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:40.017 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.017 17:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:40.017 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.017 17:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.017 BaseBdev2 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.017 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 [ 00:11:40.018 { 00:11:40.018 "name": "BaseBdev2", 00:11:40.018 "aliases": [ 00:11:40.018 "764ae030-e8bb-42e5-a617-f51c56f0e26f" 00:11:40.018 ], 00:11:40.018 "product_name": "Malloc disk", 00:11:40.018 "block_size": 512, 00:11:40.018 "num_blocks": 65536, 00:11:40.018 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:40.018 "assigned_rate_limits": { 00:11:40.018 "rw_ios_per_sec": 0, 00:11:40.018 "rw_mbytes_per_sec": 0, 00:11:40.018 "r_mbytes_per_sec": 0, 00:11:40.018 "w_mbytes_per_sec": 0 00:11:40.018 }, 00:11:40.018 "claimed": false, 00:11:40.018 "zoned": false, 00:11:40.018 "supported_io_types": { 00:11:40.018 "read": true, 00:11:40.018 "write": true, 00:11:40.018 "unmap": true, 00:11:40.018 "flush": true, 00:11:40.018 "reset": true, 00:11:40.018 "nvme_admin": false, 00:11:40.018 "nvme_io": false, 00:11:40.018 "nvme_io_md": false, 00:11:40.018 "write_zeroes": true, 00:11:40.018 "zcopy": true, 00:11:40.018 "get_zone_info": false, 00:11:40.018 "zone_management": false, 00:11:40.018 "zone_append": false, 00:11:40.018 "compare": false, 00:11:40.018 "compare_and_write": false, 00:11:40.018 "abort": true, 00:11:40.018 "seek_hole": false, 00:11:40.018 
"seek_data": false, 00:11:40.018 "copy": true, 00:11:40.018 "nvme_iov_md": false 00:11:40.018 }, 00:11:40.018 "memory_domains": [ 00:11:40.018 { 00:11:40.018 "dma_device_id": "system", 00:11:40.018 "dma_device_type": 1 00:11:40.018 }, 00:11:40.018 { 00:11:40.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.018 "dma_device_type": 2 00:11:40.018 } 00:11:40.018 ], 00:11:40.018 "driver_specific": {} 00:11:40.018 } 00:11:40.018 ] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 BaseBdev3 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 [ 00:11:40.018 { 00:11:40.018 "name": "BaseBdev3", 00:11:40.018 "aliases": [ 00:11:40.018 "0d6c1bf2-5c3f-4214-9642-1330f9309fbe" 00:11:40.018 ], 00:11:40.018 "product_name": "Malloc disk", 00:11:40.018 "block_size": 512, 00:11:40.018 "num_blocks": 65536, 00:11:40.018 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:40.018 "assigned_rate_limits": { 00:11:40.018 "rw_ios_per_sec": 0, 00:11:40.018 "rw_mbytes_per_sec": 0, 00:11:40.018 "r_mbytes_per_sec": 0, 00:11:40.018 "w_mbytes_per_sec": 0 00:11:40.018 }, 00:11:40.018 "claimed": false, 00:11:40.018 "zoned": false, 00:11:40.018 "supported_io_types": { 00:11:40.018 "read": true, 00:11:40.018 "write": true, 00:11:40.018 "unmap": true, 00:11:40.018 "flush": true, 00:11:40.018 "reset": true, 00:11:40.018 "nvme_admin": false, 00:11:40.018 "nvme_io": false, 00:11:40.018 "nvme_io_md": false, 00:11:40.018 "write_zeroes": true, 00:11:40.018 "zcopy": true, 00:11:40.018 "get_zone_info": false, 00:11:40.018 "zone_management": false, 00:11:40.018 "zone_append": false, 00:11:40.018 "compare": false, 00:11:40.018 "compare_and_write": false, 00:11:40.018 "abort": true, 00:11:40.018 "seek_hole": false, 00:11:40.018 "seek_data": false, 
00:11:40.018 "copy": true, 00:11:40.018 "nvme_iov_md": false 00:11:40.018 }, 00:11:40.018 "memory_domains": [ 00:11:40.018 { 00:11:40.018 "dma_device_id": "system", 00:11:40.018 "dma_device_type": 1 00:11:40.018 }, 00:11:40.018 { 00:11:40.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.018 "dma_device_type": 2 00:11:40.018 } 00:11:40.018 ], 00:11:40.018 "driver_specific": {} 00:11:40.018 } 00:11:40.018 ] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 BaseBdev4 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:40.018 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.279 
17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.279 [ 00:11:40.279 { 00:11:40.279 "name": "BaseBdev4", 00:11:40.279 "aliases": [ 00:11:40.279 "c9330afe-46a3-48ba-93eb-7da00e16071b" 00:11:40.279 ], 00:11:40.279 "product_name": "Malloc disk", 00:11:40.279 "block_size": 512, 00:11:40.279 "num_blocks": 65536, 00:11:40.279 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:40.279 "assigned_rate_limits": { 00:11:40.279 "rw_ios_per_sec": 0, 00:11:40.279 "rw_mbytes_per_sec": 0, 00:11:40.279 "r_mbytes_per_sec": 0, 00:11:40.279 "w_mbytes_per_sec": 0 00:11:40.279 }, 00:11:40.279 "claimed": false, 00:11:40.279 "zoned": false, 00:11:40.279 "supported_io_types": { 00:11:40.279 "read": true, 00:11:40.279 "write": true, 00:11:40.279 "unmap": true, 00:11:40.279 "flush": true, 00:11:40.279 "reset": true, 00:11:40.279 "nvme_admin": false, 00:11:40.279 "nvme_io": false, 00:11:40.279 "nvme_io_md": false, 00:11:40.279 "write_zeroes": true, 00:11:40.279 "zcopy": true, 00:11:40.279 "get_zone_info": false, 00:11:40.279 "zone_management": false, 00:11:40.279 "zone_append": false, 00:11:40.279 "compare": false, 00:11:40.279 "compare_and_write": false, 00:11:40.279 "abort": true, 00:11:40.279 "seek_hole": false, 00:11:40.279 "seek_data": false, 00:11:40.279 
"copy": true, 00:11:40.279 "nvme_iov_md": false 00:11:40.279 }, 00:11:40.279 "memory_domains": [ 00:11:40.279 { 00:11:40.279 "dma_device_id": "system", 00:11:40.279 "dma_device_type": 1 00:11:40.279 }, 00:11:40.279 { 00:11:40.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.279 "dma_device_type": 2 00:11:40.279 } 00:11:40.279 ], 00:11:40.279 "driver_specific": {} 00:11:40.279 } 00:11:40.279 ] 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.279 [2024-11-20 17:46:07.236979] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.279 [2024-11-20 17:46:07.237132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.279 [2024-11-20 17:46:07.237184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.279 [2024-11-20 17:46:07.239361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.279 [2024-11-20 17:46:07.239457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.279 17:46:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.279 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.279 "name": "Existed_Raid", 00:11:40.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.279 "strip_size_kb": 64, 00:11:40.279 "state": "configuring", 00:11:40.279 
"raid_level": "concat", 00:11:40.279 "superblock": false, 00:11:40.279 "num_base_bdevs": 4, 00:11:40.279 "num_base_bdevs_discovered": 3, 00:11:40.279 "num_base_bdevs_operational": 4, 00:11:40.279 "base_bdevs_list": [ 00:11:40.279 { 00:11:40.279 "name": "BaseBdev1", 00:11:40.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.279 "is_configured": false, 00:11:40.279 "data_offset": 0, 00:11:40.279 "data_size": 0 00:11:40.279 }, 00:11:40.279 { 00:11:40.279 "name": "BaseBdev2", 00:11:40.279 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:40.279 "is_configured": true, 00:11:40.279 "data_offset": 0, 00:11:40.280 "data_size": 65536 00:11:40.280 }, 00:11:40.280 { 00:11:40.280 "name": "BaseBdev3", 00:11:40.280 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:40.280 "is_configured": true, 00:11:40.280 "data_offset": 0, 00:11:40.280 "data_size": 65536 00:11:40.280 }, 00:11:40.280 { 00:11:40.280 "name": "BaseBdev4", 00:11:40.280 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:40.280 "is_configured": true, 00:11:40.280 "data_offset": 0, 00:11:40.280 "data_size": 65536 00:11:40.280 } 00:11:40.280 ] 00:11:40.280 }' 00:11:40.280 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.280 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.848 [2024-11-20 17:46:07.740247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.848 "name": "Existed_Raid", 00:11:40.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.848 "strip_size_kb": 64, 00:11:40.848 "state": "configuring", 00:11:40.848 "raid_level": "concat", 00:11:40.848 "superblock": false, 
00:11:40.848 "num_base_bdevs": 4, 00:11:40.848 "num_base_bdevs_discovered": 2, 00:11:40.848 "num_base_bdevs_operational": 4, 00:11:40.848 "base_bdevs_list": [ 00:11:40.848 { 00:11:40.848 "name": "BaseBdev1", 00:11:40.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.848 "is_configured": false, 00:11:40.848 "data_offset": 0, 00:11:40.848 "data_size": 0 00:11:40.848 }, 00:11:40.848 { 00:11:40.848 "name": null, 00:11:40.848 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:40.848 "is_configured": false, 00:11:40.848 "data_offset": 0, 00:11:40.848 "data_size": 65536 00:11:40.848 }, 00:11:40.848 { 00:11:40.848 "name": "BaseBdev3", 00:11:40.848 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:40.848 "is_configured": true, 00:11:40.848 "data_offset": 0, 00:11:40.848 "data_size": 65536 00:11:40.848 }, 00:11:40.848 { 00:11:40.848 "name": "BaseBdev4", 00:11:40.848 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:40.848 "is_configured": true, 00:11:40.848 "data_offset": 0, 00:11:40.848 "data_size": 65536 00:11:40.848 } 00:11:40.848 ] 00:11:40.848 }' 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.848 17:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:41.108 17:46:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.108 [2024-11-20 17:46:08.274500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.108 BaseBdev1 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.108 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.369 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.369 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.369 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.369 17:46:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.369 [ 00:11:41.369 { 00:11:41.369 "name": "BaseBdev1", 00:11:41.369 "aliases": [ 00:11:41.369 "92b06948-109a-485a-a95b-4dca3f3405a3" 00:11:41.369 ], 00:11:41.369 "product_name": "Malloc disk", 00:11:41.370 "block_size": 512, 00:11:41.370 "num_blocks": 65536, 00:11:41.370 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:41.370 "assigned_rate_limits": { 00:11:41.370 "rw_ios_per_sec": 0, 00:11:41.370 "rw_mbytes_per_sec": 0, 00:11:41.370 "r_mbytes_per_sec": 0, 00:11:41.370 "w_mbytes_per_sec": 0 00:11:41.370 }, 00:11:41.370 "claimed": true, 00:11:41.370 "claim_type": "exclusive_write", 00:11:41.370 "zoned": false, 00:11:41.370 "supported_io_types": { 00:11:41.370 "read": true, 00:11:41.370 "write": true, 00:11:41.370 "unmap": true, 00:11:41.370 "flush": true, 00:11:41.370 "reset": true, 00:11:41.370 "nvme_admin": false, 00:11:41.370 "nvme_io": false, 00:11:41.370 "nvme_io_md": false, 00:11:41.370 "write_zeroes": true, 00:11:41.370 "zcopy": true, 00:11:41.370 "get_zone_info": false, 00:11:41.370 "zone_management": false, 00:11:41.370 "zone_append": false, 00:11:41.370 "compare": false, 00:11:41.370 "compare_and_write": false, 00:11:41.370 "abort": true, 00:11:41.370 "seek_hole": false, 00:11:41.370 "seek_data": false, 00:11:41.370 "copy": true, 00:11:41.370 "nvme_iov_md": false 00:11:41.370 }, 00:11:41.370 "memory_domains": [ 00:11:41.370 { 00:11:41.370 "dma_device_id": "system", 00:11:41.370 "dma_device_type": 1 00:11:41.370 }, 00:11:41.370 { 00:11:41.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.370 "dma_device_type": 2 00:11:41.370 } 00:11:41.370 ], 00:11:41.370 "driver_specific": {} 00:11:41.370 } 00:11:41.370 ] 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.370 "name": "Existed_Raid", 00:11:41.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.370 "strip_size_kb": 64, 00:11:41.370 "state": "configuring", 00:11:41.370 "raid_level": "concat", 00:11:41.370 "superblock": false, 
00:11:41.370 "num_base_bdevs": 4, 00:11:41.370 "num_base_bdevs_discovered": 3, 00:11:41.370 "num_base_bdevs_operational": 4, 00:11:41.370 "base_bdevs_list": [ 00:11:41.370 { 00:11:41.370 "name": "BaseBdev1", 00:11:41.370 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:41.370 "is_configured": true, 00:11:41.370 "data_offset": 0, 00:11:41.370 "data_size": 65536 00:11:41.370 }, 00:11:41.370 { 00:11:41.370 "name": null, 00:11:41.370 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:41.370 "is_configured": false, 00:11:41.370 "data_offset": 0, 00:11:41.370 "data_size": 65536 00:11:41.370 }, 00:11:41.370 { 00:11:41.370 "name": "BaseBdev3", 00:11:41.370 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:41.370 "is_configured": true, 00:11:41.370 "data_offset": 0, 00:11:41.370 "data_size": 65536 00:11:41.370 }, 00:11:41.370 { 00:11:41.370 "name": "BaseBdev4", 00:11:41.370 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:41.370 "is_configured": true, 00:11:41.370 "data_offset": 0, 00:11:41.370 "data_size": 65536 00:11:41.370 } 00:11:41.370 ] 00:11:41.370 }' 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.370 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.630 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.630 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.630 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.630 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.630 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:41.890 17:46:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.890 [2024-11-20 17:46:08.829725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.890 "name": "Existed_Raid", 00:11:41.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.890 "strip_size_kb": 64, 00:11:41.890 "state": "configuring", 00:11:41.890 "raid_level": "concat", 00:11:41.890 "superblock": false, 00:11:41.890 "num_base_bdevs": 4, 00:11:41.890 "num_base_bdevs_discovered": 2, 00:11:41.890 "num_base_bdevs_operational": 4, 00:11:41.890 "base_bdevs_list": [ 00:11:41.890 { 00:11:41.890 "name": "BaseBdev1", 00:11:41.890 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:41.890 "is_configured": true, 00:11:41.890 "data_offset": 0, 00:11:41.890 "data_size": 65536 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "name": null, 00:11:41.890 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:41.890 "is_configured": false, 00:11:41.890 "data_offset": 0, 00:11:41.890 "data_size": 65536 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "name": null, 00:11:41.890 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:41.890 "is_configured": false, 00:11:41.890 "data_offset": 0, 00:11:41.890 "data_size": 65536 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "name": "BaseBdev4", 00:11:41.890 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:41.890 "is_configured": true, 00:11:41.890 "data_offset": 0, 00:11:41.890 "data_size": 65536 00:11:41.890 } 00:11:41.890 ] 00:11:41.890 }' 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.890 17:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.150 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:42.150 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.150 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.150 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.150 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.410 [2024-11-20 17:46:09.340811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.410 "name": "Existed_Raid", 00:11:42.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.410 "strip_size_kb": 64, 00:11:42.410 "state": "configuring", 00:11:42.410 "raid_level": "concat", 00:11:42.410 "superblock": false, 00:11:42.410 "num_base_bdevs": 4, 00:11:42.410 "num_base_bdevs_discovered": 3, 00:11:42.410 "num_base_bdevs_operational": 4, 00:11:42.410 "base_bdevs_list": [ 00:11:42.410 { 00:11:42.410 "name": "BaseBdev1", 00:11:42.410 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:42.410 "is_configured": true, 00:11:42.410 "data_offset": 0, 00:11:42.410 "data_size": 65536 00:11:42.410 }, 00:11:42.410 { 00:11:42.410 "name": null, 00:11:42.410 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:42.410 "is_configured": false, 00:11:42.410 "data_offset": 0, 00:11:42.410 "data_size": 65536 00:11:42.410 }, 00:11:42.410 { 00:11:42.410 "name": "BaseBdev3", 00:11:42.410 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:42.410 "is_configured": 
true, 00:11:42.410 "data_offset": 0, 00:11:42.410 "data_size": 65536 00:11:42.410 }, 00:11:42.410 { 00:11:42.410 "name": "BaseBdev4", 00:11:42.410 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:42.410 "is_configured": true, 00:11:42.410 "data_offset": 0, 00:11:42.410 "data_size": 65536 00:11:42.410 } 00:11:42.410 ] 00:11:42.410 }' 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.410 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.670 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.670 [2024-11-20 17:46:09.820098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.930 "name": "Existed_Raid", 00:11:42.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.930 "strip_size_kb": 64, 00:11:42.930 "state": "configuring", 00:11:42.930 "raid_level": "concat", 00:11:42.930 "superblock": false, 00:11:42.930 "num_base_bdevs": 4, 00:11:42.930 "num_base_bdevs_discovered": 2, 00:11:42.930 "num_base_bdevs_operational": 4, 00:11:42.930 
"base_bdevs_list": [ 00:11:42.930 { 00:11:42.930 "name": null, 00:11:42.930 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:42.930 "is_configured": false, 00:11:42.930 "data_offset": 0, 00:11:42.930 "data_size": 65536 00:11:42.930 }, 00:11:42.930 { 00:11:42.930 "name": null, 00:11:42.930 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:42.930 "is_configured": false, 00:11:42.930 "data_offset": 0, 00:11:42.930 "data_size": 65536 00:11:42.930 }, 00:11:42.930 { 00:11:42.930 "name": "BaseBdev3", 00:11:42.930 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:42.930 "is_configured": true, 00:11:42.930 "data_offset": 0, 00:11:42.930 "data_size": 65536 00:11:42.930 }, 00:11:42.930 { 00:11:42.930 "name": "BaseBdev4", 00:11:42.930 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:42.930 "is_configured": true, 00:11:42.930 "data_offset": 0, 00:11:42.930 "data_size": 65536 00:11:42.930 } 00:11:42.930 ] 00:11:42.930 }' 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.930 17:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.499 17:46:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.499 [2024-11-20 17:46:10.451373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.499 17:46:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.499 "name": "Existed_Raid", 00:11:43.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.499 "strip_size_kb": 64, 00:11:43.499 "state": "configuring", 00:11:43.499 "raid_level": "concat", 00:11:43.499 "superblock": false, 00:11:43.499 "num_base_bdevs": 4, 00:11:43.499 "num_base_bdevs_discovered": 3, 00:11:43.499 "num_base_bdevs_operational": 4, 00:11:43.499 "base_bdevs_list": [ 00:11:43.499 { 00:11:43.499 "name": null, 00:11:43.499 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:43.499 "is_configured": false, 00:11:43.499 "data_offset": 0, 00:11:43.499 "data_size": 65536 00:11:43.499 }, 00:11:43.499 { 00:11:43.499 "name": "BaseBdev2", 00:11:43.499 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:43.499 "is_configured": true, 00:11:43.499 "data_offset": 0, 00:11:43.499 "data_size": 65536 00:11:43.499 }, 00:11:43.499 { 00:11:43.499 "name": "BaseBdev3", 00:11:43.499 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:43.499 "is_configured": true, 00:11:43.499 "data_offset": 0, 00:11:43.499 "data_size": 65536 00:11:43.499 }, 00:11:43.499 { 00:11:43.499 "name": "BaseBdev4", 00:11:43.499 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:43.499 "is_configured": true, 00:11:43.499 "data_offset": 0, 00:11:43.499 "data_size": 65536 00:11:43.499 } 00:11:43.499 ] 00:11:43.499 }' 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.499 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.759 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.759 17:46:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.759 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.759 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:43.759 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.019 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:44.019 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.019 17:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.019 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.019 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.019 17:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.019 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 92b06948-109a-485a-a95b-4dca3f3405a3 00:11:44.019 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.019 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.019 [2024-11-20 17:46:11.066624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.020 [2024-11-20 17:46:11.066692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:44.020 [2024-11-20 17:46:11.066701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:44.020 [2024-11-20 17:46:11.066997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:44.020 [2024-11-20 17:46:11.067215] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:44.020 [2024-11-20 17:46:11.067229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:44.020 [2024-11-20 17:46:11.067501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.020 NewBaseBdev 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.020 [ 00:11:44.020 { 
00:11:44.020 "name": "NewBaseBdev", 00:11:44.020 "aliases": [ 00:11:44.020 "92b06948-109a-485a-a95b-4dca3f3405a3" 00:11:44.020 ], 00:11:44.020 "product_name": "Malloc disk", 00:11:44.020 "block_size": 512, 00:11:44.020 "num_blocks": 65536, 00:11:44.020 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:44.020 "assigned_rate_limits": { 00:11:44.020 "rw_ios_per_sec": 0, 00:11:44.020 "rw_mbytes_per_sec": 0, 00:11:44.020 "r_mbytes_per_sec": 0, 00:11:44.020 "w_mbytes_per_sec": 0 00:11:44.020 }, 00:11:44.020 "claimed": true, 00:11:44.020 "claim_type": "exclusive_write", 00:11:44.020 "zoned": false, 00:11:44.020 "supported_io_types": { 00:11:44.020 "read": true, 00:11:44.020 "write": true, 00:11:44.020 "unmap": true, 00:11:44.020 "flush": true, 00:11:44.020 "reset": true, 00:11:44.020 "nvme_admin": false, 00:11:44.020 "nvme_io": false, 00:11:44.020 "nvme_io_md": false, 00:11:44.020 "write_zeroes": true, 00:11:44.020 "zcopy": true, 00:11:44.020 "get_zone_info": false, 00:11:44.020 "zone_management": false, 00:11:44.020 "zone_append": false, 00:11:44.020 "compare": false, 00:11:44.020 "compare_and_write": false, 00:11:44.020 "abort": true, 00:11:44.020 "seek_hole": false, 00:11:44.020 "seek_data": false, 00:11:44.020 "copy": true, 00:11:44.020 "nvme_iov_md": false 00:11:44.020 }, 00:11:44.020 "memory_domains": [ 00:11:44.020 { 00:11:44.020 "dma_device_id": "system", 00:11:44.020 "dma_device_type": 1 00:11:44.020 }, 00:11:44.020 { 00:11:44.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.020 "dma_device_type": 2 00:11:44.020 } 00:11:44.020 ], 00:11:44.020 "driver_specific": {} 00:11:44.020 } 00:11:44.020 ] 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:44.020 
17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.020 "name": "Existed_Raid", 00:11:44.020 "uuid": "db5ba8b0-5edc-48f5-ac87-17fa01f95c76", 00:11:44.020 "strip_size_kb": 64, 00:11:44.020 "state": "online", 00:11:44.020 "raid_level": "concat", 00:11:44.020 "superblock": false, 00:11:44.020 "num_base_bdevs": 4, 00:11:44.020 "num_base_bdevs_discovered": 4, 00:11:44.020 
"num_base_bdevs_operational": 4, 00:11:44.020 "base_bdevs_list": [ 00:11:44.020 { 00:11:44.020 "name": "NewBaseBdev", 00:11:44.020 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:44.020 "is_configured": true, 00:11:44.020 "data_offset": 0, 00:11:44.020 "data_size": 65536 00:11:44.020 }, 00:11:44.020 { 00:11:44.020 "name": "BaseBdev2", 00:11:44.020 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:44.020 "is_configured": true, 00:11:44.020 "data_offset": 0, 00:11:44.020 "data_size": 65536 00:11:44.020 }, 00:11:44.020 { 00:11:44.020 "name": "BaseBdev3", 00:11:44.020 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:44.020 "is_configured": true, 00:11:44.020 "data_offset": 0, 00:11:44.020 "data_size": 65536 00:11:44.020 }, 00:11:44.020 { 00:11:44.020 "name": "BaseBdev4", 00:11:44.020 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:44.020 "is_configured": true, 00:11:44.020 "data_offset": 0, 00:11:44.020 "data_size": 65536 00:11:44.020 } 00:11:44.020 ] 00:11:44.020 }' 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.020 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.590 [2024-11-20 17:46:11.562185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.590 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.590 "name": "Existed_Raid", 00:11:44.590 "aliases": [ 00:11:44.590 "db5ba8b0-5edc-48f5-ac87-17fa01f95c76" 00:11:44.590 ], 00:11:44.590 "product_name": "Raid Volume", 00:11:44.590 "block_size": 512, 00:11:44.590 "num_blocks": 262144, 00:11:44.590 "uuid": "db5ba8b0-5edc-48f5-ac87-17fa01f95c76", 00:11:44.590 "assigned_rate_limits": { 00:11:44.590 "rw_ios_per_sec": 0, 00:11:44.590 "rw_mbytes_per_sec": 0, 00:11:44.590 "r_mbytes_per_sec": 0, 00:11:44.590 "w_mbytes_per_sec": 0 00:11:44.590 }, 00:11:44.590 "claimed": false, 00:11:44.590 "zoned": false, 00:11:44.590 "supported_io_types": { 00:11:44.590 "read": true, 00:11:44.590 "write": true, 00:11:44.590 "unmap": true, 00:11:44.590 "flush": true, 00:11:44.590 "reset": true, 00:11:44.590 "nvme_admin": false, 00:11:44.590 "nvme_io": false, 00:11:44.590 "nvme_io_md": false, 00:11:44.590 "write_zeroes": true, 00:11:44.590 "zcopy": false, 00:11:44.590 "get_zone_info": false, 00:11:44.590 "zone_management": false, 00:11:44.590 "zone_append": false, 00:11:44.590 "compare": false, 00:11:44.590 "compare_and_write": false, 00:11:44.590 "abort": false, 00:11:44.590 "seek_hole": false, 00:11:44.590 "seek_data": false, 00:11:44.590 "copy": false, 00:11:44.590 "nvme_iov_md": false 00:11:44.590 }, 00:11:44.590 "memory_domains": [ 00:11:44.590 { 00:11:44.590 "dma_device_id": "system", 
00:11:44.590 "dma_device_type": 1 00:11:44.590 }, 00:11:44.590 { 00:11:44.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.590 "dma_device_type": 2 00:11:44.590 }, 00:11:44.590 { 00:11:44.590 "dma_device_id": "system", 00:11:44.590 "dma_device_type": 1 00:11:44.590 }, 00:11:44.590 { 00:11:44.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.590 "dma_device_type": 2 00:11:44.590 }, 00:11:44.590 { 00:11:44.590 "dma_device_id": "system", 00:11:44.590 "dma_device_type": 1 00:11:44.590 }, 00:11:44.590 { 00:11:44.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.591 "dma_device_type": 2 00:11:44.591 }, 00:11:44.591 { 00:11:44.591 "dma_device_id": "system", 00:11:44.591 "dma_device_type": 1 00:11:44.591 }, 00:11:44.591 { 00:11:44.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.591 "dma_device_type": 2 00:11:44.591 } 00:11:44.591 ], 00:11:44.591 "driver_specific": { 00:11:44.591 "raid": { 00:11:44.591 "uuid": "db5ba8b0-5edc-48f5-ac87-17fa01f95c76", 00:11:44.591 "strip_size_kb": 64, 00:11:44.591 "state": "online", 00:11:44.591 "raid_level": "concat", 00:11:44.591 "superblock": false, 00:11:44.591 "num_base_bdevs": 4, 00:11:44.591 "num_base_bdevs_discovered": 4, 00:11:44.591 "num_base_bdevs_operational": 4, 00:11:44.591 "base_bdevs_list": [ 00:11:44.591 { 00:11:44.591 "name": "NewBaseBdev", 00:11:44.591 "uuid": "92b06948-109a-485a-a95b-4dca3f3405a3", 00:11:44.591 "is_configured": true, 00:11:44.591 "data_offset": 0, 00:11:44.591 "data_size": 65536 00:11:44.591 }, 00:11:44.591 { 00:11:44.591 "name": "BaseBdev2", 00:11:44.591 "uuid": "764ae030-e8bb-42e5-a617-f51c56f0e26f", 00:11:44.591 "is_configured": true, 00:11:44.591 "data_offset": 0, 00:11:44.591 "data_size": 65536 00:11:44.591 }, 00:11:44.591 { 00:11:44.591 "name": "BaseBdev3", 00:11:44.591 "uuid": "0d6c1bf2-5c3f-4214-9642-1330f9309fbe", 00:11:44.591 "is_configured": true, 00:11:44.591 "data_offset": 0, 00:11:44.591 "data_size": 65536 00:11:44.591 }, 00:11:44.591 { 00:11:44.591 "name": "BaseBdev4", 
00:11:44.591 "uuid": "c9330afe-46a3-48ba-93eb-7da00e16071b", 00:11:44.591 "is_configured": true, 00:11:44.591 "data_offset": 0, 00:11:44.591 "data_size": 65536 00:11:44.591 } 00:11:44.591 ] 00:11:44.591 } 00:11:44.591 } 00:11:44.591 }' 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:44.591 BaseBdev2 00:11:44.591 BaseBdev3 00:11:44.591 BaseBdev4' 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.591 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.851 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.852 [2024-11-20 17:46:11.881310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.852 [2024-11-20 17:46:11.881438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.852 [2024-11-20 17:46:11.881541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.852 [2024-11-20 17:46:11.881623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.852 [2024-11-20 17:46:11.881634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71707 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71707 
']' 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71707 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71707 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71707' 00:11:44.852 killing process with pid 71707 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71707 00:11:44.852 17:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71707 00:11:44.852 [2024-11-20 17:46:11.924453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.421 [2024-11-20 17:46:12.359803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:46.802 00:11:46.802 real 0m12.154s 00:11:46.802 user 0m19.072s 00:11:46.802 sys 0m2.195s 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.802 ************************************ 00:11:46.802 END TEST raid_state_function_test 00:11:46.802 ************************************ 00:11:46.802 17:46:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:46.802 
17:46:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:46.802 17:46:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.802 17:46:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.802 ************************************ 00:11:46.802 START TEST raid_state_function_test_sb 00:11:46.802 ************************************ 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72386 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:46.802 Process raid pid: 72386 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72386' 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72386 00:11:46.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72386 ']' 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.802 17:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.802 [2024-11-20 17:46:13.780840] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:11:46.802 [2024-11-20 17:46:13.781055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.802 [2024-11-20 17:46:13.959214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.060 [2024-11-20 17:46:14.104185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.319 [2024-11-20 17:46:14.354741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.319 [2024-11-20 17:46:14.354797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.578 [2024-11-20 17:46:14.681279] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.578 [2024-11-20 17:46:14.681471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.578 [2024-11-20 17:46:14.681494] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.578 [2024-11-20 17:46:14.681505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.578 [2024-11-20 17:46:14.681512] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:47.578 [2024-11-20 17:46:14.681523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:47.578 [2024-11-20 17:46:14.681529] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:47.578 [2024-11-20 17:46:14.681539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.578 
17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.578 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.578 "name": "Existed_Raid", 00:11:47.578 "uuid": "07cdb8a0-53a0-4610-83c5-f54759edf7a4", 00:11:47.578 "strip_size_kb": 64, 00:11:47.578 "state": "configuring", 00:11:47.578 "raid_level": "concat", 00:11:47.578 "superblock": true, 00:11:47.578 "num_base_bdevs": 4, 00:11:47.578 "num_base_bdevs_discovered": 0, 00:11:47.578 "num_base_bdevs_operational": 4, 00:11:47.578 "base_bdevs_list": [ 00:11:47.579 { 00:11:47.579 "name": "BaseBdev1", 00:11:47.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.579 "is_configured": false, 00:11:47.579 "data_offset": 0, 00:11:47.579 "data_size": 0 00:11:47.579 }, 00:11:47.579 { 00:11:47.579 "name": "BaseBdev2", 00:11:47.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.579 "is_configured": false, 00:11:47.579 "data_offset": 0, 00:11:47.579 "data_size": 0 00:11:47.579 }, 00:11:47.579 { 00:11:47.579 "name": "BaseBdev3", 00:11:47.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.579 "is_configured": false, 00:11:47.579 "data_offset": 0, 00:11:47.579 "data_size": 0 00:11:47.579 }, 00:11:47.579 { 00:11:47.579 "name": "BaseBdev4", 00:11:47.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.579 "is_configured": false, 00:11:47.579 "data_offset": 0, 00:11:47.579 "data_size": 0 00:11:47.579 } 00:11:47.579 ] 00:11:47.579 }' 00:11:47.579 17:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.579 17:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.187 17:46:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.187 [2024-11-20 17:46:15.152475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.187 [2024-11-20 17:46:15.152545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.187 [2024-11-20 17:46:15.164433] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.187 [2024-11-20 17:46:15.164487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.187 [2024-11-20 17:46:15.164499] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.187 [2024-11-20 17:46:15.164509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.187 [2024-11-20 17:46:15.164516] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.187 [2024-11-20 17:46:15.164526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.187 [2024-11-20 17:46:15.164532] bdev.c:8485:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:48.187 [2024-11-20 17:46:15.164542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.187 [2024-11-20 17:46:15.221271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.187 BaseBdev1 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:48.187 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.188 [ 00:11:48.188 { 00:11:48.188 "name": "BaseBdev1", 00:11:48.188 "aliases": [ 00:11:48.188 "21f16ff8-c8bc-409c-8e02-110c9e77874f" 00:11:48.188 ], 00:11:48.188 "product_name": "Malloc disk", 00:11:48.188 "block_size": 512, 00:11:48.188 "num_blocks": 65536, 00:11:48.188 "uuid": "21f16ff8-c8bc-409c-8e02-110c9e77874f", 00:11:48.188 "assigned_rate_limits": { 00:11:48.188 "rw_ios_per_sec": 0, 00:11:48.188 "rw_mbytes_per_sec": 0, 00:11:48.188 "r_mbytes_per_sec": 0, 00:11:48.188 "w_mbytes_per_sec": 0 00:11:48.188 }, 00:11:48.188 "claimed": true, 00:11:48.188 "claim_type": "exclusive_write", 00:11:48.188 "zoned": false, 00:11:48.188 "supported_io_types": { 00:11:48.188 "read": true, 00:11:48.188 "write": true, 00:11:48.188 "unmap": true, 00:11:48.188 "flush": true, 00:11:48.188 "reset": true, 00:11:48.188 "nvme_admin": false, 00:11:48.188 "nvme_io": false, 00:11:48.188 "nvme_io_md": false, 00:11:48.188 "write_zeroes": true, 00:11:48.188 "zcopy": true, 00:11:48.188 "get_zone_info": false, 00:11:48.188 "zone_management": false, 00:11:48.188 "zone_append": false, 00:11:48.188 "compare": false, 00:11:48.188 "compare_and_write": false, 00:11:48.188 "abort": true, 00:11:48.188 "seek_hole": false, 00:11:48.188 "seek_data": false, 00:11:48.188 "copy": true, 00:11:48.188 "nvme_iov_md": false 00:11:48.188 }, 00:11:48.188 "memory_domains": [ 00:11:48.188 { 00:11:48.188 "dma_device_id": "system", 00:11:48.188 "dma_device_type": 1 00:11:48.188 }, 00:11:48.188 { 00:11:48.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.188 "dma_device_type": 2 00:11:48.188 } 
00:11:48.188 ], 00:11:48.188 "driver_specific": {} 00:11:48.188 } 00:11:48.188 ] 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.188 17:46:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.188 "name": "Existed_Raid", 00:11:48.188 "uuid": "c8e65662-afee-405d-b877-17fe5208ffb2", 00:11:48.188 "strip_size_kb": 64, 00:11:48.188 "state": "configuring", 00:11:48.188 "raid_level": "concat", 00:11:48.188 "superblock": true, 00:11:48.188 "num_base_bdevs": 4, 00:11:48.188 "num_base_bdevs_discovered": 1, 00:11:48.188 "num_base_bdevs_operational": 4, 00:11:48.188 "base_bdevs_list": [ 00:11:48.188 { 00:11:48.188 "name": "BaseBdev1", 00:11:48.188 "uuid": "21f16ff8-c8bc-409c-8e02-110c9e77874f", 00:11:48.188 "is_configured": true, 00:11:48.188 "data_offset": 2048, 00:11:48.188 "data_size": 63488 00:11:48.188 }, 00:11:48.188 { 00:11:48.188 "name": "BaseBdev2", 00:11:48.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.188 "is_configured": false, 00:11:48.188 "data_offset": 0, 00:11:48.188 "data_size": 0 00:11:48.188 }, 00:11:48.188 { 00:11:48.188 "name": "BaseBdev3", 00:11:48.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.188 "is_configured": false, 00:11:48.188 "data_offset": 0, 00:11:48.188 "data_size": 0 00:11:48.188 }, 00:11:48.188 { 00:11:48.188 "name": "BaseBdev4", 00:11:48.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.188 "is_configured": false, 00:11:48.188 "data_offset": 0, 00:11:48.188 "data_size": 0 00:11:48.188 } 00:11:48.188 ] 00:11:48.188 }' 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.188 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.755 17:46:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.755 [2024-11-20 17:46:15.720520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.755 [2024-11-20 17:46:15.720694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.755 [2024-11-20 17:46:15.732569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.755 [2024-11-20 17:46:15.734775] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.755 [2024-11-20 17:46:15.734823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.755 [2024-11-20 17:46:15.734835] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.755 [2024-11-20 17:46:15.734846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.755 [2024-11-20 17:46:15.734853] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:48.755 [2024-11-20 17:46:15.734861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:48.755 "name": "Existed_Raid", 00:11:48.755 "uuid": "ba7800d0-a37b-4156-9dfc-a37e29bda387", 00:11:48.755 "strip_size_kb": 64, 00:11:48.755 "state": "configuring", 00:11:48.755 "raid_level": "concat", 00:11:48.755 "superblock": true, 00:11:48.755 "num_base_bdevs": 4, 00:11:48.755 "num_base_bdevs_discovered": 1, 00:11:48.755 "num_base_bdevs_operational": 4, 00:11:48.755 "base_bdevs_list": [ 00:11:48.755 { 00:11:48.755 "name": "BaseBdev1", 00:11:48.755 "uuid": "21f16ff8-c8bc-409c-8e02-110c9e77874f", 00:11:48.755 "is_configured": true, 00:11:48.755 "data_offset": 2048, 00:11:48.755 "data_size": 63488 00:11:48.755 }, 00:11:48.755 { 00:11:48.755 "name": "BaseBdev2", 00:11:48.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.755 "is_configured": false, 00:11:48.755 "data_offset": 0, 00:11:48.755 "data_size": 0 00:11:48.755 }, 00:11:48.755 { 00:11:48.755 "name": "BaseBdev3", 00:11:48.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.755 "is_configured": false, 00:11:48.755 "data_offset": 0, 00:11:48.755 "data_size": 0 00:11:48.755 }, 00:11:48.755 { 00:11:48.755 "name": "BaseBdev4", 00:11:48.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.755 "is_configured": false, 00:11:48.755 "data_offset": 0, 00:11:48.755 "data_size": 0 00:11:48.755 } 00:11:48.755 ] 00:11:48.755 }' 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.755 17:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.013 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:49.013 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.013 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.271 [2024-11-20 17:46:16.227480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:49.271 BaseBdev2 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.271 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.272 [ 00:11:49.272 { 00:11:49.272 "name": "BaseBdev2", 00:11:49.272 "aliases": [ 00:11:49.272 "09a786b7-e959-43f0-9b7f-19b0a83884f5" 00:11:49.272 ], 00:11:49.272 "product_name": "Malloc disk", 00:11:49.272 "block_size": 512, 00:11:49.272 "num_blocks": 65536, 00:11:49.272 "uuid": "09a786b7-e959-43f0-9b7f-19b0a83884f5", 
00:11:49.272 "assigned_rate_limits": { 00:11:49.272 "rw_ios_per_sec": 0, 00:11:49.272 "rw_mbytes_per_sec": 0, 00:11:49.272 "r_mbytes_per_sec": 0, 00:11:49.272 "w_mbytes_per_sec": 0 00:11:49.272 }, 00:11:49.272 "claimed": true, 00:11:49.272 "claim_type": "exclusive_write", 00:11:49.272 "zoned": false, 00:11:49.272 "supported_io_types": { 00:11:49.272 "read": true, 00:11:49.272 "write": true, 00:11:49.272 "unmap": true, 00:11:49.272 "flush": true, 00:11:49.272 "reset": true, 00:11:49.272 "nvme_admin": false, 00:11:49.272 "nvme_io": false, 00:11:49.272 "nvme_io_md": false, 00:11:49.272 "write_zeroes": true, 00:11:49.272 "zcopy": true, 00:11:49.272 "get_zone_info": false, 00:11:49.272 "zone_management": false, 00:11:49.272 "zone_append": false, 00:11:49.272 "compare": false, 00:11:49.272 "compare_and_write": false, 00:11:49.272 "abort": true, 00:11:49.272 "seek_hole": false, 00:11:49.272 "seek_data": false, 00:11:49.272 "copy": true, 00:11:49.272 "nvme_iov_md": false 00:11:49.272 }, 00:11:49.272 "memory_domains": [ 00:11:49.272 { 00:11:49.272 "dma_device_id": "system", 00:11:49.272 "dma_device_type": 1 00:11:49.272 }, 00:11:49.272 { 00:11:49.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.272 "dma_device_type": 2 00:11:49.272 } 00:11:49.272 ], 00:11:49.272 "driver_specific": {} 00:11:49.272 } 00:11:49.272 ] 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.272 "name": "Existed_Raid", 00:11:49.272 "uuid": "ba7800d0-a37b-4156-9dfc-a37e29bda387", 00:11:49.272 "strip_size_kb": 64, 00:11:49.272 "state": "configuring", 00:11:49.272 "raid_level": "concat", 00:11:49.272 "superblock": true, 00:11:49.272 "num_base_bdevs": 4, 00:11:49.272 "num_base_bdevs_discovered": 2, 00:11:49.272 
"num_base_bdevs_operational": 4, 00:11:49.272 "base_bdevs_list": [ 00:11:49.272 { 00:11:49.272 "name": "BaseBdev1", 00:11:49.272 "uuid": "21f16ff8-c8bc-409c-8e02-110c9e77874f", 00:11:49.272 "is_configured": true, 00:11:49.272 "data_offset": 2048, 00:11:49.272 "data_size": 63488 00:11:49.272 }, 00:11:49.272 { 00:11:49.272 "name": "BaseBdev2", 00:11:49.272 "uuid": "09a786b7-e959-43f0-9b7f-19b0a83884f5", 00:11:49.272 "is_configured": true, 00:11:49.272 "data_offset": 2048, 00:11:49.272 "data_size": 63488 00:11:49.272 }, 00:11:49.272 { 00:11:49.272 "name": "BaseBdev3", 00:11:49.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.272 "is_configured": false, 00:11:49.272 "data_offset": 0, 00:11:49.272 "data_size": 0 00:11:49.272 }, 00:11:49.272 { 00:11:49.272 "name": "BaseBdev4", 00:11:49.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.272 "is_configured": false, 00:11:49.272 "data_offset": 0, 00:11:49.272 "data_size": 0 00:11:49.272 } 00:11:49.272 ] 00:11:49.272 }' 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.272 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.841 [2024-11-20 17:46:16.768329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.841 BaseBdev3 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.841 [ 00:11:49.841 { 00:11:49.841 "name": "BaseBdev3", 00:11:49.841 "aliases": [ 00:11:49.841 "0d4c3d94-3fa3-4bc4-a269-2e4fe47fb723" 00:11:49.841 ], 00:11:49.841 "product_name": "Malloc disk", 00:11:49.841 "block_size": 512, 00:11:49.841 "num_blocks": 65536, 00:11:49.841 "uuid": "0d4c3d94-3fa3-4bc4-a269-2e4fe47fb723", 00:11:49.841 "assigned_rate_limits": { 00:11:49.841 "rw_ios_per_sec": 0, 00:11:49.841 "rw_mbytes_per_sec": 0, 00:11:49.841 "r_mbytes_per_sec": 0, 00:11:49.841 "w_mbytes_per_sec": 0 00:11:49.841 }, 00:11:49.841 "claimed": true, 00:11:49.841 "claim_type": "exclusive_write", 00:11:49.841 "zoned": false, 00:11:49.841 "supported_io_types": { 
00:11:49.841 "read": true, 00:11:49.841 "write": true, 00:11:49.841 "unmap": true, 00:11:49.841 "flush": true, 00:11:49.841 "reset": true, 00:11:49.841 "nvme_admin": false, 00:11:49.841 "nvme_io": false, 00:11:49.841 "nvme_io_md": false, 00:11:49.841 "write_zeroes": true, 00:11:49.841 "zcopy": true, 00:11:49.841 "get_zone_info": false, 00:11:49.841 "zone_management": false, 00:11:49.841 "zone_append": false, 00:11:49.841 "compare": false, 00:11:49.841 "compare_and_write": false, 00:11:49.841 "abort": true, 00:11:49.841 "seek_hole": false, 00:11:49.841 "seek_data": false, 00:11:49.841 "copy": true, 00:11:49.841 "nvme_iov_md": false 00:11:49.841 }, 00:11:49.841 "memory_domains": [ 00:11:49.841 { 00:11:49.841 "dma_device_id": "system", 00:11:49.841 "dma_device_type": 1 00:11:49.841 }, 00:11:49.841 { 00:11:49.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.841 "dma_device_type": 2 00:11:49.841 } 00:11:49.841 ], 00:11:49.841 "driver_specific": {} 00:11:49.841 } 00:11:49.841 ] 00:11:49.841 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.842 "name": "Existed_Raid", 00:11:49.842 "uuid": "ba7800d0-a37b-4156-9dfc-a37e29bda387", 00:11:49.842 "strip_size_kb": 64, 00:11:49.842 "state": "configuring", 00:11:49.842 "raid_level": "concat", 00:11:49.842 "superblock": true, 00:11:49.842 "num_base_bdevs": 4, 00:11:49.842 "num_base_bdevs_discovered": 3, 00:11:49.842 "num_base_bdevs_operational": 4, 00:11:49.842 "base_bdevs_list": [ 00:11:49.842 { 00:11:49.842 "name": "BaseBdev1", 00:11:49.842 "uuid": "21f16ff8-c8bc-409c-8e02-110c9e77874f", 00:11:49.842 "is_configured": true, 00:11:49.842 "data_offset": 2048, 00:11:49.842 "data_size": 63488 00:11:49.842 }, 00:11:49.842 { 00:11:49.842 "name": "BaseBdev2", 00:11:49.842 
"uuid": "09a786b7-e959-43f0-9b7f-19b0a83884f5", 00:11:49.842 "is_configured": true, 00:11:49.842 "data_offset": 2048, 00:11:49.842 "data_size": 63488 00:11:49.842 }, 00:11:49.842 { 00:11:49.842 "name": "BaseBdev3", 00:11:49.842 "uuid": "0d4c3d94-3fa3-4bc4-a269-2e4fe47fb723", 00:11:49.842 "is_configured": true, 00:11:49.842 "data_offset": 2048, 00:11:49.842 "data_size": 63488 00:11:49.842 }, 00:11:49.842 { 00:11:49.842 "name": "BaseBdev4", 00:11:49.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.842 "is_configured": false, 00:11:49.842 "data_offset": 0, 00:11:49.842 "data_size": 0 00:11:49.842 } 00:11:49.842 ] 00:11:49.842 }' 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.842 17:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.101 [2024-11-20 17:46:17.259699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.101 [2024-11-20 17:46:17.260047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:50.101 [2024-11-20 17:46:17.260066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.101 [2024-11-20 17:46:17.260381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:50.101 [2024-11-20 17:46:17.260547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:50.101 [2024-11-20 17:46:17.260643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:50.101 BaseBdev4 00:11:50.101 [2024-11-20 17:46:17.260943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.101 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.361 [ 00:11:50.361 { 00:11:50.361 "name": "BaseBdev4", 00:11:50.361 "aliases": [ 00:11:50.361 "43a44a03-7a85-44ab-91c5-cd37ac9cdd2e" 00:11:50.361 ], 00:11:50.361 "product_name": "Malloc disk", 00:11:50.361 "block_size": 512, 
00:11:50.361 "num_blocks": 65536, 00:11:50.361 "uuid": "43a44a03-7a85-44ab-91c5-cd37ac9cdd2e", 00:11:50.361 "assigned_rate_limits": { 00:11:50.361 "rw_ios_per_sec": 0, 00:11:50.361 "rw_mbytes_per_sec": 0, 00:11:50.361 "r_mbytes_per_sec": 0, 00:11:50.361 "w_mbytes_per_sec": 0 00:11:50.361 }, 00:11:50.361 "claimed": true, 00:11:50.361 "claim_type": "exclusive_write", 00:11:50.361 "zoned": false, 00:11:50.361 "supported_io_types": { 00:11:50.361 "read": true, 00:11:50.361 "write": true, 00:11:50.361 "unmap": true, 00:11:50.361 "flush": true, 00:11:50.361 "reset": true, 00:11:50.361 "nvme_admin": false, 00:11:50.361 "nvme_io": false, 00:11:50.361 "nvme_io_md": false, 00:11:50.361 "write_zeroes": true, 00:11:50.361 "zcopy": true, 00:11:50.361 "get_zone_info": false, 00:11:50.361 "zone_management": false, 00:11:50.361 "zone_append": false, 00:11:50.361 "compare": false, 00:11:50.361 "compare_and_write": false, 00:11:50.361 "abort": true, 00:11:50.361 "seek_hole": false, 00:11:50.361 "seek_data": false, 00:11:50.361 "copy": true, 00:11:50.361 "nvme_iov_md": false 00:11:50.361 }, 00:11:50.361 "memory_domains": [ 00:11:50.361 { 00:11:50.361 "dma_device_id": "system", 00:11:50.361 "dma_device_type": 1 00:11:50.361 }, 00:11:50.361 { 00:11:50.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.361 "dma_device_type": 2 00:11:50.361 } 00:11:50.361 ], 00:11:50.361 "driver_specific": {} 00:11:50.361 } 00:11:50.361 ] 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.361 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.361 "name": "Existed_Raid", 00:11:50.361 "uuid": "ba7800d0-a37b-4156-9dfc-a37e29bda387", 00:11:50.361 "strip_size_kb": 64, 00:11:50.361 "state": "online", 00:11:50.361 "raid_level": "concat", 00:11:50.361 "superblock": true, 00:11:50.361 "num_base_bdevs": 
4, 00:11:50.361 "num_base_bdevs_discovered": 4, 00:11:50.361 "num_base_bdevs_operational": 4, 00:11:50.361 "base_bdevs_list": [ 00:11:50.361 { 00:11:50.361 "name": "BaseBdev1", 00:11:50.361 "uuid": "21f16ff8-c8bc-409c-8e02-110c9e77874f", 00:11:50.361 "is_configured": true, 00:11:50.361 "data_offset": 2048, 00:11:50.361 "data_size": 63488 00:11:50.361 }, 00:11:50.361 { 00:11:50.361 "name": "BaseBdev2", 00:11:50.361 "uuid": "09a786b7-e959-43f0-9b7f-19b0a83884f5", 00:11:50.361 "is_configured": true, 00:11:50.361 "data_offset": 2048, 00:11:50.361 "data_size": 63488 00:11:50.361 }, 00:11:50.361 { 00:11:50.361 "name": "BaseBdev3", 00:11:50.361 "uuid": "0d4c3d94-3fa3-4bc4-a269-2e4fe47fb723", 00:11:50.361 "is_configured": true, 00:11:50.361 "data_offset": 2048, 00:11:50.361 "data_size": 63488 00:11:50.361 }, 00:11:50.361 { 00:11:50.361 "name": "BaseBdev4", 00:11:50.362 "uuid": "43a44a03-7a85-44ab-91c5-cd37ac9cdd2e", 00:11:50.362 "is_configured": true, 00:11:50.362 "data_offset": 2048, 00:11:50.362 "data_size": 63488 00:11:50.362 } 00:11:50.362 ] 00:11:50.362 }' 00:11:50.362 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.362 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.621 
17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.621 [2024-11-20 17:46:17.759470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.621 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.621 "name": "Existed_Raid", 00:11:50.621 "aliases": [ 00:11:50.621 "ba7800d0-a37b-4156-9dfc-a37e29bda387" 00:11:50.621 ], 00:11:50.621 "product_name": "Raid Volume", 00:11:50.621 "block_size": 512, 00:11:50.621 "num_blocks": 253952, 00:11:50.621 "uuid": "ba7800d0-a37b-4156-9dfc-a37e29bda387", 00:11:50.621 "assigned_rate_limits": { 00:11:50.621 "rw_ios_per_sec": 0, 00:11:50.621 "rw_mbytes_per_sec": 0, 00:11:50.621 "r_mbytes_per_sec": 0, 00:11:50.621 "w_mbytes_per_sec": 0 00:11:50.621 }, 00:11:50.621 "claimed": false, 00:11:50.621 "zoned": false, 00:11:50.621 "supported_io_types": { 00:11:50.621 "read": true, 00:11:50.621 "write": true, 00:11:50.621 "unmap": true, 00:11:50.621 "flush": true, 00:11:50.621 "reset": true, 00:11:50.621 "nvme_admin": false, 00:11:50.621 "nvme_io": false, 00:11:50.621 "nvme_io_md": false, 00:11:50.621 "write_zeroes": true, 00:11:50.621 "zcopy": false, 00:11:50.621 "get_zone_info": false, 00:11:50.621 "zone_management": false, 00:11:50.621 "zone_append": false, 00:11:50.621 "compare": false, 00:11:50.621 "compare_and_write": false, 00:11:50.621 "abort": false, 00:11:50.621 "seek_hole": false, 00:11:50.621 "seek_data": false, 00:11:50.621 "copy": false, 00:11:50.621 
"nvme_iov_md": false 00:11:50.621 }, 00:11:50.621 "memory_domains": [ 00:11:50.621 { 00:11:50.621 "dma_device_id": "system", 00:11:50.621 "dma_device_type": 1 00:11:50.621 }, 00:11:50.621 { 00:11:50.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.621 "dma_device_type": 2 00:11:50.621 }, 00:11:50.621 { 00:11:50.621 "dma_device_id": "system", 00:11:50.621 "dma_device_type": 1 00:11:50.621 }, 00:11:50.621 { 00:11:50.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.621 "dma_device_type": 2 00:11:50.621 }, 00:11:50.621 { 00:11:50.621 "dma_device_id": "system", 00:11:50.621 "dma_device_type": 1 00:11:50.621 }, 00:11:50.621 { 00:11:50.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.621 "dma_device_type": 2 00:11:50.621 }, 00:11:50.621 { 00:11:50.621 "dma_device_id": "system", 00:11:50.621 "dma_device_type": 1 00:11:50.621 }, 00:11:50.621 { 00:11:50.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.621 "dma_device_type": 2 00:11:50.621 } 00:11:50.621 ], 00:11:50.621 "driver_specific": { 00:11:50.621 "raid": { 00:11:50.621 "uuid": "ba7800d0-a37b-4156-9dfc-a37e29bda387", 00:11:50.621 "strip_size_kb": 64, 00:11:50.621 "state": "online", 00:11:50.621 "raid_level": "concat", 00:11:50.621 "superblock": true, 00:11:50.621 "num_base_bdevs": 4, 00:11:50.621 "num_base_bdevs_discovered": 4, 00:11:50.621 "num_base_bdevs_operational": 4, 00:11:50.621 "base_bdevs_list": [ 00:11:50.621 { 00:11:50.621 "name": "BaseBdev1", 00:11:50.621 "uuid": "21f16ff8-c8bc-409c-8e02-110c9e77874f", 00:11:50.622 "is_configured": true, 00:11:50.622 "data_offset": 2048, 00:11:50.622 "data_size": 63488 00:11:50.622 }, 00:11:50.622 { 00:11:50.622 "name": "BaseBdev2", 00:11:50.622 "uuid": "09a786b7-e959-43f0-9b7f-19b0a83884f5", 00:11:50.622 "is_configured": true, 00:11:50.622 "data_offset": 2048, 00:11:50.622 "data_size": 63488 00:11:50.622 }, 00:11:50.622 { 00:11:50.622 "name": "BaseBdev3", 00:11:50.622 "uuid": "0d4c3d94-3fa3-4bc4-a269-2e4fe47fb723", 00:11:50.622 "is_configured": true, 
00:11:50.622 "data_offset": 2048, 00:11:50.622 "data_size": 63488 00:11:50.622 }, 00:11:50.622 { 00:11:50.622 "name": "BaseBdev4", 00:11:50.622 "uuid": "43a44a03-7a85-44ab-91c5-cd37ac9cdd2e", 00:11:50.622 "is_configured": true, 00:11:50.622 "data_offset": 2048, 00:11:50.622 "data_size": 63488 00:11:50.622 } 00:11:50.622 ] 00:11:50.622 } 00:11:50.622 } 00:11:50.622 }' 00:11:50.622 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:50.882 BaseBdev2 00:11:50.882 BaseBdev3 00:11:50.882 BaseBdev4' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.882 17:46:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 17:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.882 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.882 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:50.882 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:50.882 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.882 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.143 [2024-11-20 17:46:18.086605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.143 [2024-11-20 17:46:18.086752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.143 [2024-11-20 17:46:18.086869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.143 "name": "Existed_Raid", 00:11:51.143 "uuid": "ba7800d0-a37b-4156-9dfc-a37e29bda387", 00:11:51.143 "strip_size_kb": 64, 00:11:51.143 "state": "offline", 00:11:51.143 "raid_level": "concat", 00:11:51.143 "superblock": true, 00:11:51.143 "num_base_bdevs": 4, 00:11:51.143 "num_base_bdevs_discovered": 3, 00:11:51.143 "num_base_bdevs_operational": 3, 00:11:51.143 "base_bdevs_list": [ 00:11:51.143 { 00:11:51.143 "name": null, 00:11:51.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.143 "is_configured": false, 00:11:51.143 "data_offset": 0, 00:11:51.143 "data_size": 63488 00:11:51.143 }, 00:11:51.143 { 00:11:51.143 "name": "BaseBdev2", 00:11:51.143 "uuid": "09a786b7-e959-43f0-9b7f-19b0a83884f5", 00:11:51.143 "is_configured": true, 00:11:51.143 "data_offset": 2048, 00:11:51.143 "data_size": 63488 00:11:51.143 }, 00:11:51.143 { 00:11:51.143 "name": "BaseBdev3", 00:11:51.143 "uuid": "0d4c3d94-3fa3-4bc4-a269-2e4fe47fb723", 00:11:51.143 "is_configured": true, 00:11:51.143 "data_offset": 2048, 00:11:51.143 "data_size": 63488 00:11:51.143 }, 00:11:51.143 { 00:11:51.143 "name": "BaseBdev4", 00:11:51.143 "uuid": "43a44a03-7a85-44ab-91c5-cd37ac9cdd2e", 00:11:51.143 "is_configured": true, 00:11:51.143 "data_offset": 2048, 00:11:51.143 "data_size": 63488 00:11:51.143 } 00:11:51.143 ] 00:11:51.143 }' 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.143 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.714 
17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.714 [2024-11-20 17:46:18.697702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.714 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.714 [2024-11-20 17:46:18.868293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:51.974 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.975 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.975 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.975 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.975 17:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.975 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.975 17:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.975 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.975 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.975 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.975 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:51.975 17:46:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.975 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.975 [2024-11-20 17:46:19.047929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:51.975 [2024-11-20 17:46:19.048125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.236 BaseBdev2 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.236 [ 00:11:52.236 { 00:11:52.236 "name": "BaseBdev2", 00:11:52.236 "aliases": [ 00:11:52.236 
"dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61" 00:11:52.236 ], 00:11:52.236 "product_name": "Malloc disk", 00:11:52.236 "block_size": 512, 00:11:52.236 "num_blocks": 65536, 00:11:52.236 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:52.236 "assigned_rate_limits": { 00:11:52.236 "rw_ios_per_sec": 0, 00:11:52.236 "rw_mbytes_per_sec": 0, 00:11:52.236 "r_mbytes_per_sec": 0, 00:11:52.236 "w_mbytes_per_sec": 0 00:11:52.236 }, 00:11:52.236 "claimed": false, 00:11:52.236 "zoned": false, 00:11:52.236 "supported_io_types": { 00:11:52.236 "read": true, 00:11:52.236 "write": true, 00:11:52.236 "unmap": true, 00:11:52.236 "flush": true, 00:11:52.236 "reset": true, 00:11:52.236 "nvme_admin": false, 00:11:52.236 "nvme_io": false, 00:11:52.236 "nvme_io_md": false, 00:11:52.236 "write_zeroes": true, 00:11:52.236 "zcopy": true, 00:11:52.236 "get_zone_info": false, 00:11:52.236 "zone_management": false, 00:11:52.236 "zone_append": false, 00:11:52.236 "compare": false, 00:11:52.236 "compare_and_write": false, 00:11:52.236 "abort": true, 00:11:52.236 "seek_hole": false, 00:11:52.236 "seek_data": false, 00:11:52.236 "copy": true, 00:11:52.236 "nvme_iov_md": false 00:11:52.236 }, 00:11:52.236 "memory_domains": [ 00:11:52.236 { 00:11:52.236 "dma_device_id": "system", 00:11:52.236 "dma_device_type": 1 00:11:52.236 }, 00:11:52.236 { 00:11:52.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.236 "dma_device_type": 2 00:11:52.236 } 00:11:52.236 ], 00:11:52.236 "driver_specific": {} 00:11:52.236 } 00:11:52.236 ] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.236 17:46:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.236 BaseBdev3 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.236 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.236 [ 00:11:52.236 { 
00:11:52.236 "name": "BaseBdev3", 00:11:52.236 "aliases": [ 00:11:52.236 "050529bd-5645-4f63-a780-d3a6671874ce" 00:11:52.236 ], 00:11:52.236 "product_name": "Malloc disk", 00:11:52.236 "block_size": 512, 00:11:52.236 "num_blocks": 65536, 00:11:52.236 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:52.236 "assigned_rate_limits": { 00:11:52.236 "rw_ios_per_sec": 0, 00:11:52.236 "rw_mbytes_per_sec": 0, 00:11:52.236 "r_mbytes_per_sec": 0, 00:11:52.236 "w_mbytes_per_sec": 0 00:11:52.236 }, 00:11:52.236 "claimed": false, 00:11:52.236 "zoned": false, 00:11:52.236 "supported_io_types": { 00:11:52.236 "read": true, 00:11:52.236 "write": true, 00:11:52.236 "unmap": true, 00:11:52.236 "flush": true, 00:11:52.236 "reset": true, 00:11:52.236 "nvme_admin": false, 00:11:52.236 "nvme_io": false, 00:11:52.236 "nvme_io_md": false, 00:11:52.236 "write_zeroes": true, 00:11:52.236 "zcopy": true, 00:11:52.236 "get_zone_info": false, 00:11:52.236 "zone_management": false, 00:11:52.236 "zone_append": false, 00:11:52.236 "compare": false, 00:11:52.236 "compare_and_write": false, 00:11:52.236 "abort": true, 00:11:52.236 "seek_hole": false, 00:11:52.236 "seek_data": false, 00:11:52.237 "copy": true, 00:11:52.237 "nvme_iov_md": false 00:11:52.237 }, 00:11:52.237 "memory_domains": [ 00:11:52.237 { 00:11:52.237 "dma_device_id": "system", 00:11:52.237 "dma_device_type": 1 00:11:52.237 }, 00:11:52.237 { 00:11:52.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.237 "dma_device_type": 2 00:11:52.237 } 00:11:52.237 ], 00:11:52.237 "driver_specific": {} 00:11:52.237 } 00:11:52.237 ] 00:11:52.237 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.237 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.237 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:52.237 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:52.237 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.499 BaseBdev4 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:52.499 [ 00:11:52.499 { 00:11:52.499 "name": "BaseBdev4", 00:11:52.499 "aliases": [ 00:11:52.499 "4f89abc5-89a5-4b82-baa9-98c6a82b8638" 00:11:52.499 ], 00:11:52.499 "product_name": "Malloc disk", 00:11:52.499 "block_size": 512, 00:11:52.499 "num_blocks": 65536, 00:11:52.499 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:52.499 "assigned_rate_limits": { 00:11:52.499 "rw_ios_per_sec": 0, 00:11:52.499 "rw_mbytes_per_sec": 0, 00:11:52.499 "r_mbytes_per_sec": 0, 00:11:52.499 "w_mbytes_per_sec": 0 00:11:52.499 }, 00:11:52.499 "claimed": false, 00:11:52.499 "zoned": false, 00:11:52.499 "supported_io_types": { 00:11:52.499 "read": true, 00:11:52.499 "write": true, 00:11:52.499 "unmap": true, 00:11:52.499 "flush": true, 00:11:52.499 "reset": true, 00:11:52.499 "nvme_admin": false, 00:11:52.499 "nvme_io": false, 00:11:52.499 "nvme_io_md": false, 00:11:52.499 "write_zeroes": true, 00:11:52.499 "zcopy": true, 00:11:52.499 "get_zone_info": false, 00:11:52.499 "zone_management": false, 00:11:52.499 "zone_append": false, 00:11:52.499 "compare": false, 00:11:52.499 "compare_and_write": false, 00:11:52.499 "abort": true, 00:11:52.499 "seek_hole": false, 00:11:52.499 "seek_data": false, 00:11:52.499 "copy": true, 00:11:52.499 "nvme_iov_md": false 00:11:52.499 }, 00:11:52.499 "memory_domains": [ 00:11:52.499 { 00:11:52.499 "dma_device_id": "system", 00:11:52.499 "dma_device_type": 1 00:11:52.499 }, 00:11:52.499 { 00:11:52.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.499 "dma_device_type": 2 00:11:52.499 } 00:11:52.499 ], 00:11:52.499 "driver_specific": {} 00:11:52.499 } 00:11:52.499 ] 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:52.499 17:46:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.499 [2024-11-20 17:46:19.515639] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.499 [2024-11-20 17:46:19.515795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.499 [2024-11-20 17:46:19.515856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.499 [2024-11-20 17:46:19.518208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.499 [2024-11-20 17:46:19.518272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.499 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.499 "name": "Existed_Raid", 00:11:52.499 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:52.499 "strip_size_kb": 64, 00:11:52.499 "state": "configuring", 00:11:52.499 "raid_level": "concat", 00:11:52.499 "superblock": true, 00:11:52.499 "num_base_bdevs": 4, 00:11:52.499 "num_base_bdevs_discovered": 3, 00:11:52.499 "num_base_bdevs_operational": 4, 00:11:52.499 "base_bdevs_list": [ 00:11:52.499 { 00:11:52.499 "name": "BaseBdev1", 00:11:52.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.499 "is_configured": false, 00:11:52.500 "data_offset": 0, 00:11:52.500 "data_size": 0 00:11:52.500 }, 00:11:52.500 { 00:11:52.500 "name": "BaseBdev2", 00:11:52.500 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:52.500 "is_configured": true, 00:11:52.500 "data_offset": 2048, 00:11:52.500 "data_size": 63488 
00:11:52.500 }, 00:11:52.500 { 00:11:52.500 "name": "BaseBdev3", 00:11:52.500 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:52.500 "is_configured": true, 00:11:52.500 "data_offset": 2048, 00:11:52.500 "data_size": 63488 00:11:52.500 }, 00:11:52.500 { 00:11:52.500 "name": "BaseBdev4", 00:11:52.500 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:52.500 "is_configured": true, 00:11:52.500 "data_offset": 2048, 00:11:52.500 "data_size": 63488 00:11:52.500 } 00:11:52.500 ] 00:11:52.500 }' 00:11:52.500 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.500 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.760 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:52.760 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.760 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.020 [2024-11-20 17:46:19.939069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.020 "name": "Existed_Raid", 00:11:53.020 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:53.020 "strip_size_kb": 64, 00:11:53.020 "state": "configuring", 00:11:53.020 "raid_level": "concat", 00:11:53.020 "superblock": true, 00:11:53.020 "num_base_bdevs": 4, 00:11:53.020 "num_base_bdevs_discovered": 2, 00:11:53.020 "num_base_bdevs_operational": 4, 00:11:53.020 "base_bdevs_list": [ 00:11:53.020 { 00:11:53.020 "name": "BaseBdev1", 00:11:53.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.020 "is_configured": false, 00:11:53.020 "data_offset": 0, 00:11:53.020 "data_size": 0 00:11:53.020 }, 00:11:53.020 { 00:11:53.020 "name": null, 00:11:53.020 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:53.020 "is_configured": false, 00:11:53.020 "data_offset": 0, 00:11:53.020 "data_size": 63488 
00:11:53.020 }, 00:11:53.020 { 00:11:53.020 "name": "BaseBdev3", 00:11:53.020 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:53.020 "is_configured": true, 00:11:53.020 "data_offset": 2048, 00:11:53.020 "data_size": 63488 00:11:53.020 }, 00:11:53.020 { 00:11:53.020 "name": "BaseBdev4", 00:11:53.020 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:53.020 "is_configured": true, 00:11:53.020 "data_offset": 2048, 00:11:53.020 "data_size": 63488 00:11:53.020 } 00:11:53.020 ] 00:11:53.020 }' 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.020 17:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.280 [2024-11-20 17:46:20.434506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.280 BaseBdev1 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.280 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.281 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.281 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.281 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.281 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:53.281 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.281 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.541 [ 00:11:53.541 { 00:11:53.541 "name": "BaseBdev1", 00:11:53.541 "aliases": [ 00:11:53.541 "c1d58e3a-0942-456a-a280-c692b177ca87" 00:11:53.541 ], 00:11:53.541 "product_name": "Malloc disk", 00:11:53.541 "block_size": 512, 00:11:53.541 "num_blocks": 65536, 00:11:53.541 "uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:53.541 "assigned_rate_limits": { 00:11:53.541 "rw_ios_per_sec": 0, 00:11:53.541 "rw_mbytes_per_sec": 0, 
00:11:53.541 "r_mbytes_per_sec": 0, 00:11:53.541 "w_mbytes_per_sec": 0 00:11:53.541 }, 00:11:53.541 "claimed": true, 00:11:53.541 "claim_type": "exclusive_write", 00:11:53.541 "zoned": false, 00:11:53.541 "supported_io_types": { 00:11:53.541 "read": true, 00:11:53.541 "write": true, 00:11:53.541 "unmap": true, 00:11:53.541 "flush": true, 00:11:53.541 "reset": true, 00:11:53.541 "nvme_admin": false, 00:11:53.541 "nvme_io": false, 00:11:53.541 "nvme_io_md": false, 00:11:53.541 "write_zeroes": true, 00:11:53.541 "zcopy": true, 00:11:53.541 "get_zone_info": false, 00:11:53.541 "zone_management": false, 00:11:53.541 "zone_append": false, 00:11:53.541 "compare": false, 00:11:53.541 "compare_and_write": false, 00:11:53.541 "abort": true, 00:11:53.541 "seek_hole": false, 00:11:53.541 "seek_data": false, 00:11:53.541 "copy": true, 00:11:53.541 "nvme_iov_md": false 00:11:53.541 }, 00:11:53.541 "memory_domains": [ 00:11:53.541 { 00:11:53.541 "dma_device_id": "system", 00:11:53.541 "dma_device_type": 1 00:11:53.541 }, 00:11:53.541 { 00:11:53.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.541 "dma_device_type": 2 00:11:53.541 } 00:11:53.541 ], 00:11:53.541 "driver_specific": {} 00:11:53.541 } 00:11:53.541 ] 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.541 17:46:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.541 "name": "Existed_Raid", 00:11:53.541 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:53.541 "strip_size_kb": 64, 00:11:53.541 "state": "configuring", 00:11:53.541 "raid_level": "concat", 00:11:53.541 "superblock": true, 00:11:53.541 "num_base_bdevs": 4, 00:11:53.541 "num_base_bdevs_discovered": 3, 00:11:53.541 "num_base_bdevs_operational": 4, 00:11:53.541 "base_bdevs_list": [ 00:11:53.541 { 00:11:53.541 "name": "BaseBdev1", 00:11:53.541 "uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:53.541 "is_configured": true, 00:11:53.541 "data_offset": 2048, 00:11:53.541 "data_size": 63488 00:11:53.541 }, 00:11:53.541 { 
00:11:53.541 "name": null, 00:11:53.541 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:53.541 "is_configured": false, 00:11:53.541 "data_offset": 0, 00:11:53.541 "data_size": 63488 00:11:53.541 }, 00:11:53.541 { 00:11:53.541 "name": "BaseBdev3", 00:11:53.541 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:53.541 "is_configured": true, 00:11:53.541 "data_offset": 2048, 00:11:53.541 "data_size": 63488 00:11:53.541 }, 00:11:53.541 { 00:11:53.541 "name": "BaseBdev4", 00:11:53.541 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:53.541 "is_configured": true, 00:11:53.541 "data_offset": 2048, 00:11:53.541 "data_size": 63488 00:11:53.541 } 00:11:53.541 ] 00:11:53.541 }' 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.541 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.800 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:53.800 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.800 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.800 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.800 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.060 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:54.060 17:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:54.060 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.060 17:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.060 [2024-11-20 17:46:21.005900] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.060 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.061 17:46:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.061 "name": "Existed_Raid", 00:11:54.061 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:54.061 "strip_size_kb": 64, 00:11:54.061 "state": "configuring", 00:11:54.061 "raid_level": "concat", 00:11:54.061 "superblock": true, 00:11:54.061 "num_base_bdevs": 4, 00:11:54.061 "num_base_bdevs_discovered": 2, 00:11:54.061 "num_base_bdevs_operational": 4, 00:11:54.061 "base_bdevs_list": [ 00:11:54.061 { 00:11:54.061 "name": "BaseBdev1", 00:11:54.061 "uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:54.061 "is_configured": true, 00:11:54.061 "data_offset": 2048, 00:11:54.061 "data_size": 63488 00:11:54.061 }, 00:11:54.061 { 00:11:54.061 "name": null, 00:11:54.061 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:54.061 "is_configured": false, 00:11:54.061 "data_offset": 0, 00:11:54.061 "data_size": 63488 00:11:54.061 }, 00:11:54.061 { 00:11:54.061 "name": null, 00:11:54.061 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:54.061 "is_configured": false, 00:11:54.061 "data_offset": 0, 00:11:54.061 "data_size": 63488 00:11:54.061 }, 00:11:54.061 { 00:11:54.061 "name": "BaseBdev4", 00:11:54.061 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:54.061 "is_configured": true, 00:11:54.061 "data_offset": 2048, 00:11:54.061 "data_size": 63488 00:11:54.061 } 00:11:54.061 ] 00:11:54.061 }' 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.061 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.321 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.321 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:54.321 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.321 
17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.321 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.581 [2024-11-20 17:46:21.509044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.581 "name": "Existed_Raid", 00:11:54.581 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:54.581 "strip_size_kb": 64, 00:11:54.581 "state": "configuring", 00:11:54.581 "raid_level": "concat", 00:11:54.581 "superblock": true, 00:11:54.581 "num_base_bdevs": 4, 00:11:54.581 "num_base_bdevs_discovered": 3, 00:11:54.581 "num_base_bdevs_operational": 4, 00:11:54.581 "base_bdevs_list": [ 00:11:54.581 { 00:11:54.581 "name": "BaseBdev1", 00:11:54.581 "uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:54.581 "is_configured": true, 00:11:54.581 "data_offset": 2048, 00:11:54.581 "data_size": 63488 00:11:54.581 }, 00:11:54.581 { 00:11:54.581 "name": null, 00:11:54.581 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:54.581 "is_configured": false, 00:11:54.581 "data_offset": 0, 00:11:54.581 "data_size": 63488 00:11:54.581 }, 00:11:54.581 { 00:11:54.581 "name": "BaseBdev3", 00:11:54.581 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:54.581 "is_configured": true, 00:11:54.581 "data_offset": 2048, 00:11:54.581 "data_size": 63488 00:11:54.581 }, 00:11:54.581 { 00:11:54.581 "name": "BaseBdev4", 00:11:54.581 "uuid": 
"4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:54.581 "is_configured": true, 00:11:54.581 "data_offset": 2048, 00:11:54.581 "data_size": 63488 00:11:54.581 } 00:11:54.581 ] 00:11:54.581 }' 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.581 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.840 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.841 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.841 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:54.841 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.841 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.841 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:54.841 17:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.841 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.841 17:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.841 [2024-11-20 17:46:21.984367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.101 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.101 "name": "Existed_Raid", 00:11:55.101 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:55.101 "strip_size_kb": 64, 00:11:55.101 "state": "configuring", 00:11:55.101 "raid_level": "concat", 00:11:55.101 "superblock": true, 00:11:55.101 "num_base_bdevs": 4, 00:11:55.101 "num_base_bdevs_discovered": 2, 00:11:55.101 "num_base_bdevs_operational": 4, 00:11:55.101 "base_bdevs_list": [ 00:11:55.101 { 00:11:55.101 "name": null, 00:11:55.101 
"uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:55.101 "is_configured": false, 00:11:55.101 "data_offset": 0, 00:11:55.101 "data_size": 63488 00:11:55.101 }, 00:11:55.102 { 00:11:55.102 "name": null, 00:11:55.102 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:55.102 "is_configured": false, 00:11:55.102 "data_offset": 0, 00:11:55.102 "data_size": 63488 00:11:55.102 }, 00:11:55.102 { 00:11:55.102 "name": "BaseBdev3", 00:11:55.102 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:55.102 "is_configured": true, 00:11:55.102 "data_offset": 2048, 00:11:55.102 "data_size": 63488 00:11:55.102 }, 00:11:55.102 { 00:11:55.102 "name": "BaseBdev4", 00:11:55.102 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:55.102 "is_configured": true, 00:11:55.102 "data_offset": 2048, 00:11:55.102 "data_size": 63488 00:11:55.102 } 00:11:55.102 ] 00:11:55.102 }' 00:11:55.102 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.102 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.671 [2024-11-20 17:46:22.601214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.671 17:46:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.671 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.671 "name": "Existed_Raid", 00:11:55.671 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:55.671 "strip_size_kb": 64, 00:11:55.671 "state": "configuring", 00:11:55.671 "raid_level": "concat", 00:11:55.671 "superblock": true, 00:11:55.671 "num_base_bdevs": 4, 00:11:55.671 "num_base_bdevs_discovered": 3, 00:11:55.672 "num_base_bdevs_operational": 4, 00:11:55.672 "base_bdevs_list": [ 00:11:55.672 { 00:11:55.672 "name": null, 00:11:55.672 "uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:55.672 "is_configured": false, 00:11:55.672 "data_offset": 0, 00:11:55.672 "data_size": 63488 00:11:55.672 }, 00:11:55.672 { 00:11:55.672 "name": "BaseBdev2", 00:11:55.672 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:55.672 "is_configured": true, 00:11:55.672 "data_offset": 2048, 00:11:55.672 "data_size": 63488 00:11:55.672 }, 00:11:55.672 { 00:11:55.672 "name": "BaseBdev3", 00:11:55.672 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:55.672 "is_configured": true, 00:11:55.672 "data_offset": 2048, 00:11:55.672 "data_size": 63488 00:11:55.672 }, 00:11:55.672 { 00:11:55.672 "name": "BaseBdev4", 00:11:55.672 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:55.672 "is_configured": true, 00:11:55.672 "data_offset": 2048, 00:11:55.672 "data_size": 63488 00:11:55.672 } 00:11:55.672 ] 00:11:55.672 }' 00:11:55.672 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.672 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.954 17:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:55.954 17:46:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.954 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.954 17:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1d58e3a-0942-456a-a280-c692b177ca87 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.954 [2024-11-20 17:46:23.109357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:55.954 [2024-11-20 17:46:23.109730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:55.954 [2024-11-20 17:46:23.109781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:55.954 [2024-11-20 17:46:23.110134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:55.954 [2024-11-20 17:46:23.110325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:55.954 [2024-11-20 17:46:23.110369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:55.954 NewBaseBdev 00:11:55.954 [2024-11-20 17:46:23.110549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.954 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:56.247 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.247 17:46:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.247 [ 00:11:56.247 { 00:11:56.247 "name": "NewBaseBdev", 00:11:56.248 "aliases": [ 00:11:56.248 "c1d58e3a-0942-456a-a280-c692b177ca87" 00:11:56.248 ], 00:11:56.248 "product_name": "Malloc disk", 00:11:56.248 "block_size": 512, 00:11:56.248 "num_blocks": 65536, 00:11:56.248 "uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:56.248 "assigned_rate_limits": { 00:11:56.248 "rw_ios_per_sec": 0, 00:11:56.248 "rw_mbytes_per_sec": 0, 00:11:56.248 "r_mbytes_per_sec": 0, 00:11:56.248 "w_mbytes_per_sec": 0 00:11:56.248 }, 00:11:56.248 "claimed": true, 00:11:56.248 "claim_type": "exclusive_write", 00:11:56.248 "zoned": false, 00:11:56.248 "supported_io_types": { 00:11:56.248 "read": true, 00:11:56.248 "write": true, 00:11:56.248 "unmap": true, 00:11:56.248 "flush": true, 00:11:56.248 "reset": true, 00:11:56.248 "nvme_admin": false, 00:11:56.248 "nvme_io": false, 00:11:56.248 "nvme_io_md": false, 00:11:56.248 "write_zeroes": true, 00:11:56.248 "zcopy": true, 00:11:56.248 "get_zone_info": false, 00:11:56.248 "zone_management": false, 00:11:56.248 "zone_append": false, 00:11:56.248 "compare": false, 00:11:56.248 "compare_and_write": false, 00:11:56.248 "abort": true, 00:11:56.248 "seek_hole": false, 00:11:56.248 "seek_data": false, 00:11:56.248 "copy": true, 00:11:56.248 "nvme_iov_md": false 00:11:56.248 }, 00:11:56.248 "memory_domains": [ 00:11:56.248 { 00:11:56.248 "dma_device_id": "system", 00:11:56.248 "dma_device_type": 1 00:11:56.248 }, 00:11:56.248 { 00:11:56.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.248 "dma_device_type": 2 00:11:56.248 } 00:11:56.248 ], 00:11:56.248 "driver_specific": {} 00:11:56.248 } 00:11:56.248 ] 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:56.248 17:46:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.248 "name": "Existed_Raid", 00:11:56.248 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:56.248 "strip_size_kb": 64, 00:11:56.248 
"state": "online", 00:11:56.248 "raid_level": "concat", 00:11:56.248 "superblock": true, 00:11:56.248 "num_base_bdevs": 4, 00:11:56.248 "num_base_bdevs_discovered": 4, 00:11:56.248 "num_base_bdevs_operational": 4, 00:11:56.248 "base_bdevs_list": [ 00:11:56.248 { 00:11:56.248 "name": "NewBaseBdev", 00:11:56.248 "uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:56.248 "is_configured": true, 00:11:56.248 "data_offset": 2048, 00:11:56.248 "data_size": 63488 00:11:56.248 }, 00:11:56.248 { 00:11:56.248 "name": "BaseBdev2", 00:11:56.248 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:56.248 "is_configured": true, 00:11:56.248 "data_offset": 2048, 00:11:56.248 "data_size": 63488 00:11:56.248 }, 00:11:56.248 { 00:11:56.248 "name": "BaseBdev3", 00:11:56.248 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:56.248 "is_configured": true, 00:11:56.248 "data_offset": 2048, 00:11:56.248 "data_size": 63488 00:11:56.248 }, 00:11:56.248 { 00:11:56.248 "name": "BaseBdev4", 00:11:56.248 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:56.248 "is_configured": true, 00:11:56.248 "data_offset": 2048, 00:11:56.248 "data_size": 63488 00:11:56.248 } 00:11:56.248 ] 00:11:56.248 }' 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.248 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.508 
17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.508 [2024-11-20 17:46:23.613075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.508 "name": "Existed_Raid", 00:11:56.508 "aliases": [ 00:11:56.508 "17ce81d4-4735-478f-bd9d-a25759ba3dc9" 00:11:56.508 ], 00:11:56.508 "product_name": "Raid Volume", 00:11:56.508 "block_size": 512, 00:11:56.508 "num_blocks": 253952, 00:11:56.508 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:56.508 "assigned_rate_limits": { 00:11:56.508 "rw_ios_per_sec": 0, 00:11:56.508 "rw_mbytes_per_sec": 0, 00:11:56.508 "r_mbytes_per_sec": 0, 00:11:56.508 "w_mbytes_per_sec": 0 00:11:56.508 }, 00:11:56.508 "claimed": false, 00:11:56.508 "zoned": false, 00:11:56.508 "supported_io_types": { 00:11:56.508 "read": true, 00:11:56.508 "write": true, 00:11:56.508 "unmap": true, 00:11:56.508 "flush": true, 00:11:56.508 "reset": true, 00:11:56.508 "nvme_admin": false, 00:11:56.508 "nvme_io": false, 00:11:56.508 "nvme_io_md": false, 00:11:56.508 "write_zeroes": true, 00:11:56.508 "zcopy": false, 00:11:56.508 "get_zone_info": false, 00:11:56.508 "zone_management": false, 00:11:56.508 "zone_append": false, 00:11:56.508 "compare": false, 00:11:56.508 "compare_and_write": false, 00:11:56.508 "abort": 
false, 00:11:56.508 "seek_hole": false, 00:11:56.508 "seek_data": false, 00:11:56.508 "copy": false, 00:11:56.508 "nvme_iov_md": false 00:11:56.508 }, 00:11:56.508 "memory_domains": [ 00:11:56.508 { 00:11:56.508 "dma_device_id": "system", 00:11:56.508 "dma_device_type": 1 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.508 "dma_device_type": 2 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "dma_device_id": "system", 00:11:56.508 "dma_device_type": 1 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.508 "dma_device_type": 2 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "dma_device_id": "system", 00:11:56.508 "dma_device_type": 1 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.508 "dma_device_type": 2 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "dma_device_id": "system", 00:11:56.508 "dma_device_type": 1 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.508 "dma_device_type": 2 00:11:56.508 } 00:11:56.508 ], 00:11:56.508 "driver_specific": { 00:11:56.508 "raid": { 00:11:56.508 "uuid": "17ce81d4-4735-478f-bd9d-a25759ba3dc9", 00:11:56.508 "strip_size_kb": 64, 00:11:56.508 "state": "online", 00:11:56.508 "raid_level": "concat", 00:11:56.508 "superblock": true, 00:11:56.508 "num_base_bdevs": 4, 00:11:56.508 "num_base_bdevs_discovered": 4, 00:11:56.508 "num_base_bdevs_operational": 4, 00:11:56.508 "base_bdevs_list": [ 00:11:56.508 { 00:11:56.508 "name": "NewBaseBdev", 00:11:56.508 "uuid": "c1d58e3a-0942-456a-a280-c692b177ca87", 00:11:56.508 "is_configured": true, 00:11:56.508 "data_offset": 2048, 00:11:56.508 "data_size": 63488 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "name": "BaseBdev2", 00:11:56.508 "uuid": "dbcd2e8a-244f-4ee6-ad2e-5b022c54ba61", 00:11:56.508 "is_configured": true, 00:11:56.508 "data_offset": 2048, 00:11:56.508 "data_size": 63488 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 
"name": "BaseBdev3", 00:11:56.508 "uuid": "050529bd-5645-4f63-a780-d3a6671874ce", 00:11:56.508 "is_configured": true, 00:11:56.508 "data_offset": 2048, 00:11:56.508 "data_size": 63488 00:11:56.508 }, 00:11:56.508 { 00:11:56.508 "name": "BaseBdev4", 00:11:56.508 "uuid": "4f89abc5-89a5-4b82-baa9-98c6a82b8638", 00:11:56.508 "is_configured": true, 00:11:56.508 "data_offset": 2048, 00:11:56.508 "data_size": 63488 00:11:56.508 } 00:11:56.508 ] 00:11:56.508 } 00:11:56.508 } 00:11:56.508 }' 00:11:56.508 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:56.768 BaseBdev2 00:11:56.768 BaseBdev3 00:11:56.768 BaseBdev4' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.768 17:46:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.768 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.027 [2024-11-20 17:46:23.955973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.027 [2024-11-20 17:46:23.956121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.027 [2024-11-20 17:46:23.956227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.027 [2024-11-20 17:46:23.956318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.027 [2024-11-20 17:46:23.956330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72386 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72386 ']' 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72386 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72386 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72386' 00:11:57.027 killing process with pid 72386 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72386 00:11:57.027 [2024-11-20 17:46:23.988228] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.027 17:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72386 00:11:57.597 [2024-11-20 17:46:24.480926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:58.978 17:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:58.978 00:11:58.978 real 0m12.148s 00:11:58.978 user 0m18.856s 00:11:58.978 sys 0m2.229s 00:11:58.978 ************************************ 00:11:58.978 END TEST raid_state_function_test_sb 00:11:58.978 
************************************ 00:11:58.978 17:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.978 17:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.978 17:46:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:58.978 17:46:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:58.978 17:46:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.978 17:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.978 ************************************ 00:11:58.978 START TEST raid_superblock_test 00:11:58.978 ************************************ 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:58.978 17:46:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73065 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73065 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73065 ']' 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.978 17:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.978 [2024-11-20 17:46:25.993064] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:11:58.978 [2024-11-20 17:46:25.993265] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73065 ] 00:11:59.237 [2024-11-20 17:46:26.164132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.237 [2024-11-20 17:46:26.316647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.496 [2024-11-20 17:46:26.558457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.496 [2024-11-20 17:46:26.558611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:59.756 
17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.756 malloc1 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.756 [2024-11-20 17:46:26.881299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:59.756 [2024-11-20 17:46:26.881478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.756 [2024-11-20 17:46:26.881511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:59.756 [2024-11-20 17:46:26.881522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.756 [2024-11-20 17:46:26.884108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.756 [2024-11-20 17:46:26.884148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:59.756 pt1 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.756 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.039 malloc2 00:12:00.039 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.039 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:00.039 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.039 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.039 [2024-11-20 17:46:26.942172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:00.039 [2024-11-20 17:46:26.942234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.039 [2024-11-20 17:46:26.942263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:00.039 [2024-11-20 17:46:26.942273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.039 [2024-11-20 17:46:26.944652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.039 [2024-11-20 17:46:26.944771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:00.039 
pt2 00:12:00.039 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.039 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.040 17:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.040 malloc3 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.040 [2024-11-20 17:46:27.016630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:00.040 [2024-11-20 17:46:27.016711] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.040 [2024-11-20 17:46:27.016738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:00.040 [2024-11-20 17:46:27.016748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.040 [2024-11-20 17:46:27.019397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.040 [2024-11-20 17:46:27.019475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:00.040 pt3 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.040 malloc4 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.040 [2024-11-20 17:46:27.080812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:00.040 [2024-11-20 17:46:27.080891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.040 [2024-11-20 17:46:27.080914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:00.040 [2024-11-20 17:46:27.080924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.040 [2024-11-20 17:46:27.083299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.040 [2024-11-20 17:46:27.083335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:00.040 pt4 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.040 [2024-11-20 17:46:27.092843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:00.040 [2024-11-20 
17:46:27.094943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:00.040 [2024-11-20 17:46:27.095173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:00.040 [2024-11-20 17:46:27.095232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:00.040 [2024-11-20 17:46:27.095456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:00.040 [2024-11-20 17:46:27.095469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:00.040 [2024-11-20 17:46:27.095737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:00.040 [2024-11-20 17:46:27.095904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:00.040 [2024-11-20 17:46:27.095917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:00.040 [2024-11-20 17:46:27.096086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.040 "name": "raid_bdev1", 00:12:00.040 "uuid": "47dbabb0-a01b-4887-b677-13e4269cd364", 00:12:00.040 "strip_size_kb": 64, 00:12:00.040 "state": "online", 00:12:00.040 "raid_level": "concat", 00:12:00.040 "superblock": true, 00:12:00.040 "num_base_bdevs": 4, 00:12:00.040 "num_base_bdevs_discovered": 4, 00:12:00.040 "num_base_bdevs_operational": 4, 00:12:00.040 "base_bdevs_list": [ 00:12:00.040 { 00:12:00.040 "name": "pt1", 00:12:00.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:00.040 "is_configured": true, 00:12:00.040 "data_offset": 2048, 00:12:00.040 "data_size": 63488 00:12:00.040 }, 00:12:00.040 { 00:12:00.040 "name": "pt2", 00:12:00.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.040 "is_configured": true, 00:12:00.040 "data_offset": 2048, 00:12:00.040 "data_size": 63488 00:12:00.040 }, 00:12:00.040 { 00:12:00.040 "name": "pt3", 00:12:00.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.040 "is_configured": true, 00:12:00.040 "data_offset": 2048, 00:12:00.040 
"data_size": 63488 00:12:00.040 }, 00:12:00.040 { 00:12:00.040 "name": "pt4", 00:12:00.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.040 "is_configured": true, 00:12:00.040 "data_offset": 2048, 00:12:00.040 "data_size": 63488 00:12:00.040 } 00:12:00.040 ] 00:12:00.040 }' 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.040 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 [2024-11-20 17:46:27.552414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.611 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.611 "name": "raid_bdev1", 00:12:00.611 "aliases": [ 00:12:00.611 "47dbabb0-a01b-4887-b677-13e4269cd364" 
00:12:00.611 ], 00:12:00.611 "product_name": "Raid Volume", 00:12:00.611 "block_size": 512, 00:12:00.611 "num_blocks": 253952, 00:12:00.611 "uuid": "47dbabb0-a01b-4887-b677-13e4269cd364", 00:12:00.611 "assigned_rate_limits": { 00:12:00.611 "rw_ios_per_sec": 0, 00:12:00.611 "rw_mbytes_per_sec": 0, 00:12:00.611 "r_mbytes_per_sec": 0, 00:12:00.611 "w_mbytes_per_sec": 0 00:12:00.611 }, 00:12:00.611 "claimed": false, 00:12:00.611 "zoned": false, 00:12:00.611 "supported_io_types": { 00:12:00.611 "read": true, 00:12:00.611 "write": true, 00:12:00.611 "unmap": true, 00:12:00.611 "flush": true, 00:12:00.611 "reset": true, 00:12:00.611 "nvme_admin": false, 00:12:00.611 "nvme_io": false, 00:12:00.611 "nvme_io_md": false, 00:12:00.611 "write_zeroes": true, 00:12:00.611 "zcopy": false, 00:12:00.611 "get_zone_info": false, 00:12:00.611 "zone_management": false, 00:12:00.611 "zone_append": false, 00:12:00.611 "compare": false, 00:12:00.611 "compare_and_write": false, 00:12:00.611 "abort": false, 00:12:00.611 "seek_hole": false, 00:12:00.611 "seek_data": false, 00:12:00.611 "copy": false, 00:12:00.611 "nvme_iov_md": false 00:12:00.611 }, 00:12:00.611 "memory_domains": [ 00:12:00.611 { 00:12:00.611 "dma_device_id": "system", 00:12:00.611 "dma_device_type": 1 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.611 "dma_device_type": 2 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": "system", 00:12:00.611 "dma_device_type": 1 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.611 "dma_device_type": 2 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": "system", 00:12:00.611 "dma_device_type": 1 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.611 "dma_device_type": 2 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": "system", 00:12:00.611 "dma_device_type": 1 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:00.611 "dma_device_type": 2 00:12:00.611 } 00:12:00.611 ], 00:12:00.611 "driver_specific": { 00:12:00.611 "raid": { 00:12:00.611 "uuid": "47dbabb0-a01b-4887-b677-13e4269cd364", 00:12:00.611 "strip_size_kb": 64, 00:12:00.611 "state": "online", 00:12:00.611 "raid_level": "concat", 00:12:00.611 "superblock": true, 00:12:00.611 "num_base_bdevs": 4, 00:12:00.611 "num_base_bdevs_discovered": 4, 00:12:00.611 "num_base_bdevs_operational": 4, 00:12:00.611 "base_bdevs_list": [ 00:12:00.611 { 00:12:00.611 "name": "pt1", 00:12:00.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:00.611 "is_configured": true, 00:12:00.611 "data_offset": 2048, 00:12:00.611 "data_size": 63488 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "name": "pt2", 00:12:00.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.611 "is_configured": true, 00:12:00.611 "data_offset": 2048, 00:12:00.611 "data_size": 63488 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "name": "pt3", 00:12:00.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.611 "is_configured": true, 00:12:00.611 "data_offset": 2048, 00:12:00.611 "data_size": 63488 00:12:00.612 }, 00:12:00.612 { 00:12:00.612 "name": "pt4", 00:12:00.612 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.612 "is_configured": true, 00:12:00.612 "data_offset": 2048, 00:12:00.612 "data_size": 63488 00:12:00.612 } 00:12:00.612 ] 00:12:00.612 } 00:12:00.612 } 00:12:00.612 }' 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:00.612 pt2 00:12:00.612 pt3 00:12:00.612 pt4' 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.612 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.872 17:46:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:00.872 [2024-11-20 17:46:27.871904] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=47dbabb0-a01b-4887-b677-13e4269cd364 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 47dbabb0-a01b-4887-b677-13e4269cd364 ']' 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.872 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.873 [2024-11-20 17:46:27.915442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.873 [2024-11-20 17:46:27.915491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.873 [2024-11-20 17:46:27.915616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.873 [2024-11-20 17:46:27.915705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.873 [2024-11-20 17:46:27.915721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.873 17:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.873 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.133 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.134 [2024-11-20 17:46:28.075228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:01.134 [2024-11-20 17:46:28.077645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:01.134 [2024-11-20 17:46:28.077702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:01.134 [2024-11-20 17:46:28.077738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:01.134 [2024-11-20 17:46:28.077798] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:01.134 [2024-11-20 17:46:28.077864] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:01.134 [2024-11-20 17:46:28.077892] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:01.134 [2024-11-20 17:46:28.077911] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:01.134 [2024-11-20 17:46:28.077926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:01.134 [2024-11-20 17:46:28.077939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:01.134 request: 00:12:01.134 { 00:12:01.134 "name": "raid_bdev1", 00:12:01.134 "raid_level": "concat", 00:12:01.134 "base_bdevs": [ 00:12:01.134 "malloc1", 00:12:01.134 "malloc2", 00:12:01.134 "malloc3", 00:12:01.134 "malloc4" 00:12:01.134 ], 00:12:01.134 "strip_size_kb": 64, 00:12:01.134 "superblock": false, 00:12:01.134 "method": "bdev_raid_create", 00:12:01.134 "req_id": 1 00:12:01.134 } 00:12:01.134 Got JSON-RPC error response 00:12:01.134 response: 00:12:01.134 { 00:12:01.134 "code": -17, 00:12:01.134 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:01.134 } 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.134 [2024-11-20 17:46:28.134991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:01.134 [2024-11-20 17:46:28.135123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.134 [2024-11-20 17:46:28.135162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:01.134 [2024-11-20 17:46:28.135198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.134 [2024-11-20 17:46:28.137707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.134 [2024-11-20 17:46:28.137786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:01.134 [2024-11-20 17:46:28.137900] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:01.134 [2024-11-20 17:46:28.137991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:01.134 pt1 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.134 "name": "raid_bdev1", 00:12:01.134 "uuid": "47dbabb0-a01b-4887-b677-13e4269cd364", 00:12:01.134 "strip_size_kb": 64, 00:12:01.134 "state": "configuring", 00:12:01.134 "raid_level": "concat", 00:12:01.134 "superblock": true, 00:12:01.134 "num_base_bdevs": 4, 00:12:01.134 "num_base_bdevs_discovered": 1, 00:12:01.134 "num_base_bdevs_operational": 4, 00:12:01.134 "base_bdevs_list": [ 00:12:01.134 { 00:12:01.134 "name": "pt1", 00:12:01.134 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:01.134 "is_configured": true, 00:12:01.134 "data_offset": 2048, 00:12:01.134 "data_size": 63488 00:12:01.134 }, 00:12:01.134 { 00:12:01.134 "name": null, 00:12:01.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.134 "is_configured": false, 00:12:01.134 "data_offset": 2048, 00:12:01.134 "data_size": 63488 00:12:01.134 }, 00:12:01.134 { 00:12:01.134 "name": null, 00:12:01.134 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:01.134 "is_configured": false, 00:12:01.134 "data_offset": 2048, 00:12:01.134 "data_size": 63488 00:12:01.134 }, 00:12:01.134 { 00:12:01.134 "name": null, 00:12:01.134 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:01.134 "is_configured": false, 00:12:01.134 "data_offset": 2048, 00:12:01.134 "data_size": 63488 00:12:01.134 } 00:12:01.134 ] 00:12:01.134 }' 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.134 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.705 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:01.705 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:01.705 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.705 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.705 [2024-11-20 17:46:28.634201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:01.705 [2024-11-20 17:46:28.634403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.705 [2024-11-20 17:46:28.634431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:01.705 [2024-11-20 17:46:28.634443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.705 [2024-11-20 17:46:28.634950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.705 [2024-11-20 17:46:28.634971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:01.705 [2024-11-20 17:46:28.635083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:01.705 [2024-11-20 17:46:28.635114] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:01.705 pt2 00:12:01.705 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.705 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:01.705 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.705 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.705 [2024-11-20 17:46:28.646160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.706 17:46:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.706 "name": "raid_bdev1", 00:12:01.706 "uuid": "47dbabb0-a01b-4887-b677-13e4269cd364", 00:12:01.706 "strip_size_kb": 64, 00:12:01.706 "state": "configuring", 00:12:01.706 "raid_level": "concat", 00:12:01.706 "superblock": true, 00:12:01.706 "num_base_bdevs": 4, 00:12:01.706 "num_base_bdevs_discovered": 1, 00:12:01.706 "num_base_bdevs_operational": 4, 00:12:01.706 "base_bdevs_list": [ 00:12:01.706 { 00:12:01.706 "name": "pt1", 00:12:01.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:01.706 "is_configured": true, 00:12:01.706 "data_offset": 2048, 00:12:01.706 "data_size": 63488 00:12:01.706 }, 00:12:01.706 { 00:12:01.706 "name": null, 00:12:01.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.706 "is_configured": false, 00:12:01.706 "data_offset": 0, 00:12:01.706 "data_size": 63488 00:12:01.706 }, 00:12:01.706 { 00:12:01.706 "name": null, 00:12:01.706 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:01.706 "is_configured": false, 00:12:01.706 "data_offset": 2048, 00:12:01.706 "data_size": 63488 00:12:01.706 }, 00:12:01.706 { 00:12:01.706 "name": null, 00:12:01.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:01.706 "is_configured": false, 00:12:01.706 "data_offset": 2048, 00:12:01.706 "data_size": 63488 00:12:01.706 } 00:12:01.706 ] 00:12:01.706 }' 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.706 17:46:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.966 [2024-11-20 17:46:29.065455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:01.966 [2024-11-20 17:46:29.065637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.966 [2024-11-20 17:46:29.065678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:01.966 [2024-11-20 17:46:29.065706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.966 [2024-11-20 17:46:29.066248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.966 [2024-11-20 17:46:29.066313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:01.966 [2024-11-20 17:46:29.066451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:01.966 [2024-11-20 17:46:29.066502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:01.966 pt2 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.966 [2024-11-20 17:46:29.077370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:01.966 [2024-11-20 17:46:29.077480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.966 [2024-11-20 17:46:29.077516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:01.966 [2024-11-20 17:46:29.077544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.966 [2024-11-20 17:46:29.077980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.966 [2024-11-20 17:46:29.078062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:01.966 [2024-11-20 17:46:29.078156] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:01.966 [2024-11-20 17:46:29.078214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:01.966 pt3 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.966 [2024-11-20 17:46:29.089312] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:01.966 [2024-11-20 17:46:29.089357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.966 [2024-11-20 17:46:29.089374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:01.966 [2024-11-20 17:46:29.089383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.966 [2024-11-20 17:46:29.089799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.966 [2024-11-20 17:46:29.089815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:01.966 [2024-11-20 17:46:29.089875] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:01.966 [2024-11-20 17:46:29.089898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:01.966 [2024-11-20 17:46:29.090056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:01.966 [2024-11-20 17:46:29.090066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:01.966 [2024-11-20 17:46:29.090331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:01.966 [2024-11-20 17:46:29.090498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:01.966 [2024-11-20 17:46:29.090511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:01.966 [2024-11-20 17:46:29.090631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.966 pt4 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.966 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.225 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.225 "name": "raid_bdev1", 00:12:02.225 "uuid": "47dbabb0-a01b-4887-b677-13e4269cd364", 00:12:02.225 "strip_size_kb": 64, 00:12:02.225 "state": "online", 00:12:02.225 "raid_level": "concat", 00:12:02.225 
"superblock": true, 00:12:02.225 "num_base_bdevs": 4, 00:12:02.225 "num_base_bdevs_discovered": 4, 00:12:02.225 "num_base_bdevs_operational": 4, 00:12:02.225 "base_bdevs_list": [ 00:12:02.225 { 00:12:02.225 "name": "pt1", 00:12:02.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:02.225 "is_configured": true, 00:12:02.225 "data_offset": 2048, 00:12:02.225 "data_size": 63488 00:12:02.225 }, 00:12:02.225 { 00:12:02.225 "name": "pt2", 00:12:02.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.225 "is_configured": true, 00:12:02.225 "data_offset": 2048, 00:12:02.225 "data_size": 63488 00:12:02.225 }, 00:12:02.225 { 00:12:02.225 "name": "pt3", 00:12:02.225 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.225 "is_configured": true, 00:12:02.225 "data_offset": 2048, 00:12:02.225 "data_size": 63488 00:12:02.225 }, 00:12:02.225 { 00:12:02.225 "name": "pt4", 00:12:02.225 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:02.225 "is_configured": true, 00:12:02.225 "data_offset": 2048, 00:12:02.225 "data_size": 63488 00:12:02.225 } 00:12:02.225 ] 00:12:02.225 }' 00:12:02.225 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.225 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:02.484 17:46:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.484 [2024-11-20 17:46:29.588914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.484 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:02.484 "name": "raid_bdev1", 00:12:02.484 "aliases": [ 00:12:02.484 "47dbabb0-a01b-4887-b677-13e4269cd364" 00:12:02.484 ], 00:12:02.484 "product_name": "Raid Volume", 00:12:02.484 "block_size": 512, 00:12:02.484 "num_blocks": 253952, 00:12:02.484 "uuid": "47dbabb0-a01b-4887-b677-13e4269cd364", 00:12:02.484 "assigned_rate_limits": { 00:12:02.484 "rw_ios_per_sec": 0, 00:12:02.484 "rw_mbytes_per_sec": 0, 00:12:02.484 "r_mbytes_per_sec": 0, 00:12:02.484 "w_mbytes_per_sec": 0 00:12:02.484 }, 00:12:02.484 "claimed": false, 00:12:02.484 "zoned": false, 00:12:02.484 "supported_io_types": { 00:12:02.484 "read": true, 00:12:02.485 "write": true, 00:12:02.485 "unmap": true, 00:12:02.485 "flush": true, 00:12:02.485 "reset": true, 00:12:02.485 "nvme_admin": false, 00:12:02.485 "nvme_io": false, 00:12:02.485 "nvme_io_md": false, 00:12:02.485 "write_zeroes": true, 00:12:02.485 "zcopy": false, 00:12:02.485 "get_zone_info": false, 00:12:02.485 "zone_management": false, 00:12:02.485 "zone_append": false, 00:12:02.485 "compare": false, 00:12:02.485 "compare_and_write": false, 00:12:02.485 "abort": false, 00:12:02.485 "seek_hole": false, 00:12:02.485 "seek_data": false, 00:12:02.485 "copy": false, 00:12:02.485 "nvme_iov_md": false 00:12:02.485 }, 00:12:02.485 
"memory_domains": [ 00:12:02.485 { 00:12:02.485 "dma_device_id": "system", 00:12:02.485 "dma_device_type": 1 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.485 "dma_device_type": 2 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "dma_device_id": "system", 00:12:02.485 "dma_device_type": 1 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.485 "dma_device_type": 2 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "dma_device_id": "system", 00:12:02.485 "dma_device_type": 1 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.485 "dma_device_type": 2 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "dma_device_id": "system", 00:12:02.485 "dma_device_type": 1 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.485 "dma_device_type": 2 00:12:02.485 } 00:12:02.485 ], 00:12:02.485 "driver_specific": { 00:12:02.485 "raid": { 00:12:02.485 "uuid": "47dbabb0-a01b-4887-b677-13e4269cd364", 00:12:02.485 "strip_size_kb": 64, 00:12:02.485 "state": "online", 00:12:02.485 "raid_level": "concat", 00:12:02.485 "superblock": true, 00:12:02.485 "num_base_bdevs": 4, 00:12:02.485 "num_base_bdevs_discovered": 4, 00:12:02.485 "num_base_bdevs_operational": 4, 00:12:02.485 "base_bdevs_list": [ 00:12:02.485 { 00:12:02.485 "name": "pt1", 00:12:02.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:02.485 "is_configured": true, 00:12:02.485 "data_offset": 2048, 00:12:02.485 "data_size": 63488 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "name": "pt2", 00:12:02.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.485 "is_configured": true, 00:12:02.485 "data_offset": 2048, 00:12:02.485 "data_size": 63488 00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "name": "pt3", 00:12:02.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.485 "is_configured": true, 00:12:02.485 "data_offset": 2048, 00:12:02.485 "data_size": 63488 
00:12:02.485 }, 00:12:02.485 { 00:12:02.485 "name": "pt4", 00:12:02.485 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:02.485 "is_configured": true, 00:12:02.485 "data_offset": 2048, 00:12:02.485 "data_size": 63488 00:12:02.485 } 00:12:02.485 ] 00:12:02.485 } 00:12:02.485 } 00:12:02.485 }' 00:12:02.485 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:02.744 pt2 00:12:02.744 pt3 00:12:02.744 pt4' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.744 
17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.744 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.744 [2024-11-20 17:46:29.912237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 47dbabb0-a01b-4887-b677-13e4269cd364 '!=' 47dbabb0-a01b-4887-b677-13e4269cd364 ']' 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73065 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73065 ']' 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73065 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73065 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73065' 00:12:03.004 killing process with pid 73065 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73065 00:12:03.004 [2024-11-20 17:46:29.993634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.004 17:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73065 00:12:03.004 [2024-11-20 17:46:29.993825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.005 [2024-11-20 17:46:29.993945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.005 [2024-11-20 17:46:29.993989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:03.264 [2024-11-20 17:46:30.436976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.643 17:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:04.643 00:12:04.643 real 0m5.798s 00:12:04.643 user 0m8.121s 00:12:04.643 sys 0m1.034s 00:12:04.643 17:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.643 17:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.643 ************************************ 00:12:04.643 END TEST raid_superblock_test 
00:12:04.643 ************************************ 00:12:04.643 17:46:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:04.643 17:46:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:04.643 17:46:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.643 17:46:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.643 ************************************ 00:12:04.643 START TEST raid_read_error_test 00:12:04.643 ************************************ 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ALPE2it7hf 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73324 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73324 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73324 ']' 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.643 17:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.904 [2024-11-20 17:46:31.884482] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:12:04.904 [2024-11-20 17:46:31.884721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73324 ] 00:12:04.904 [2024-11-20 17:46:32.065205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.163 [2024-11-20 17:46:32.219528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.422 [2024-11-20 17:46:32.482525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.422 [2024-11-20 17:46:32.482670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.682 BaseBdev1_malloc 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.682 true 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.682 [2024-11-20 17:46:32.780392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.682 [2024-11-20 17:46:32.780474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.682 [2024-11-20 17:46:32.780501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:05.682 [2024-11-20 17:46:32.780515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.682 [2024-11-20 17:46:32.783202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.682 [2024-11-20 17:46:32.783245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.682 BaseBdev1 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.682 BaseBdev2_malloc 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.682 true 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.682 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.682 [2024-11-20 17:46:32.855745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.682 [2024-11-20 17:46:32.855909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.682 [2024-11-20 17:46:32.855933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:05.682 [2024-11-20 17:46:32.855946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.942 [2024-11-20 17:46:32.858560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.942 [2024-11-20 17:46:32.858601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.942 BaseBdev2 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.942 BaseBdev3_malloc 00:12:05.942 17:46:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.942 true 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.942 [2024-11-20 17:46:32.944932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:05.942 [2024-11-20 17:46:32.945027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.942 [2024-11-20 17:46:32.945051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:05.942 [2024-11-20 17:46:32.945066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.942 [2024-11-20 17:46:32.947741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.942 [2024-11-20 17:46:32.947786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:05.942 BaseBdev3 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:05.942 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.943 17:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 BaseBdev4_malloc 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 true 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 [2024-11-20 17:46:33.023781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:05.943 [2024-11-20 17:46:33.023858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.943 [2024-11-20 17:46:33.023880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:05.943 [2024-11-20 17:46:33.023892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.943 [2024-11-20 17:46:33.026540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.943 [2024-11-20 17:46:33.026586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:05.943 BaseBdev4 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 [2024-11-20 17:46:33.035858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.943 [2024-11-20 17:46:33.038271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.943 [2024-11-20 17:46:33.038357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.943 [2024-11-20 17:46:33.038427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.943 [2024-11-20 17:46:33.038689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:05.943 [2024-11-20 17:46:33.038712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:05.943 [2024-11-20 17:46:33.039004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:05.943 [2024-11-20 17:46:33.039230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:05.943 [2024-11-20 17:46:33.039250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:05.943 [2024-11-20 17:46:33.039431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:05.943 17:46:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.943 "name": "raid_bdev1", 00:12:05.943 "uuid": "d6d57a42-0d5a-44e2-81ac-85d11ea9a100", 00:12:05.943 "strip_size_kb": 64, 00:12:05.943 "state": "online", 00:12:05.943 "raid_level": "concat", 00:12:05.943 "superblock": true, 00:12:05.943 "num_base_bdevs": 4, 00:12:05.943 "num_base_bdevs_discovered": 4, 00:12:05.943 "num_base_bdevs_operational": 4, 00:12:05.943 "base_bdevs_list": [ 
00:12:05.943 { 00:12:05.943 "name": "BaseBdev1", 00:12:05.943 "uuid": "c5ab31b3-cc3f-5203-9dd5-43ddd9ae363d", 00:12:05.943 "is_configured": true, 00:12:05.943 "data_offset": 2048, 00:12:05.943 "data_size": 63488 00:12:05.943 }, 00:12:05.943 { 00:12:05.943 "name": "BaseBdev2", 00:12:05.943 "uuid": "3a0bc7a0-7afe-59e7-9dc0-def5c786ae04", 00:12:05.943 "is_configured": true, 00:12:05.943 "data_offset": 2048, 00:12:05.943 "data_size": 63488 00:12:05.943 }, 00:12:05.943 { 00:12:05.943 "name": "BaseBdev3", 00:12:05.943 "uuid": "e9b1c44f-c769-551d-a401-ac813e4a9c4b", 00:12:05.943 "is_configured": true, 00:12:05.943 "data_offset": 2048, 00:12:05.943 "data_size": 63488 00:12:05.943 }, 00:12:05.943 { 00:12:05.943 "name": "BaseBdev4", 00:12:05.943 "uuid": "14398375-b9b5-505b-88d2-06b4d1040f0f", 00:12:05.943 "is_configured": true, 00:12:05.943 "data_offset": 2048, 00:12:05.943 "data_size": 63488 00:12:05.943 } 00:12:05.943 ] 00:12:05.943 }' 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.943 17:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.512 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:06.512 17:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.512 [2024-11-20 17:46:33.616628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.451 17:46:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.451 17:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.451 17:46:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.451 "name": "raid_bdev1", 00:12:07.451 "uuid": "d6d57a42-0d5a-44e2-81ac-85d11ea9a100", 00:12:07.451 "strip_size_kb": 64, 00:12:07.451 "state": "online", 00:12:07.451 "raid_level": "concat", 00:12:07.451 "superblock": true, 00:12:07.451 "num_base_bdevs": 4, 00:12:07.451 "num_base_bdevs_discovered": 4, 00:12:07.451 "num_base_bdevs_operational": 4, 00:12:07.451 "base_bdevs_list": [ 00:12:07.451 { 00:12:07.451 "name": "BaseBdev1", 00:12:07.451 "uuid": "c5ab31b3-cc3f-5203-9dd5-43ddd9ae363d", 00:12:07.451 "is_configured": true, 00:12:07.451 "data_offset": 2048, 00:12:07.451 "data_size": 63488 00:12:07.451 }, 00:12:07.451 { 00:12:07.451 "name": "BaseBdev2", 00:12:07.451 "uuid": "3a0bc7a0-7afe-59e7-9dc0-def5c786ae04", 00:12:07.451 "is_configured": true, 00:12:07.451 "data_offset": 2048, 00:12:07.451 "data_size": 63488 00:12:07.451 }, 00:12:07.451 { 00:12:07.451 "name": "BaseBdev3", 00:12:07.451 "uuid": "e9b1c44f-c769-551d-a401-ac813e4a9c4b", 00:12:07.451 "is_configured": true, 00:12:07.451 "data_offset": 2048, 00:12:07.451 "data_size": 63488 00:12:07.451 }, 00:12:07.451 { 00:12:07.451 "name": "BaseBdev4", 00:12:07.451 "uuid": "14398375-b9b5-505b-88d2-06b4d1040f0f", 00:12:07.452 "is_configured": true, 00:12:07.452 "data_offset": 2048, 00:12:07.452 "data_size": 63488 00:12:07.452 } 00:12:07.452 ] 00:12:07.452 }' 00:12:07.452 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.452 17:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.065 17:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.065 [2024-11-20 17:46:35.007231] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.065 [2024-11-20 17:46:35.007287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.065 [2024-11-20 17:46:35.010134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.065 [2024-11-20 17:46:35.010209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.065 [2024-11-20 17:46:35.010259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.065 [2024-11-20 17:46:35.010273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:08.065 { 00:12:08.065 "results": [ 00:12:08.065 { 00:12:08.065 "job": "raid_bdev1", 00:12:08.065 "core_mask": "0x1", 00:12:08.065 "workload": "randrw", 00:12:08.065 "percentage": 50, 00:12:08.065 "status": "finished", 00:12:08.065 "queue_depth": 1, 00:12:08.065 "io_size": 131072, 00:12:08.065 "runtime": 1.390942, 00:12:08.065 "iops": 12347.028129138382, 00:12:08.065 "mibps": 1543.3785161422977, 00:12:08.065 "io_failed": 1, 00:12:08.065 "io_timeout": 0, 00:12:08.065 "avg_latency_us": 113.79713633734418, 00:12:08.065 "min_latency_us": 28.28296943231441, 00:12:08.065 "max_latency_us": 1652.709170305677 00:12:08.065 } 00:12:08.065 ], 00:12:08.065 "core_count": 1 00:12:08.065 } 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73324 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73324 ']' 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73324 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73324 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.065 killing process with pid 73324 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73324' 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73324 00:12:08.065 [2024-11-20 17:46:35.056195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.065 17:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73324 00:12:08.324 [2024-11-20 17:46:35.460082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ALPE2it7hf 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:09.702 00:12:09.702 real 0m5.032s 00:12:09.702 user 0m5.836s 00:12:09.702 sys 0m0.687s 00:12:09.702 ************************************ 00:12:09.702 END TEST raid_read_error_test 
00:12:09.702 ************************************ 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.702 17:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.702 17:46:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:09.702 17:46:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:09.702 17:46:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.702 17:46:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.702 ************************************ 00:12:09.702 START TEST raid_write_error_test 00:12:09.702 ************************************ 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:09.702 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oGUtrtEoFP 00:12:09.963 17:46:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73474 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73474 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73474 ']' 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.963 17:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.963 [2024-11-20 17:46:36.974152] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:12:09.963 [2024-11-20 17:46:36.974352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73474 ] 00:12:10.223 [2024-11-20 17:46:37.148578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.223 [2024-11-20 17:46:37.263480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.482 [2024-11-20 17:46:37.462457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.482 [2024-11-20 17:46:37.462601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.742 BaseBdev1_malloc 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.742 true 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.742 [2024-11-20 17:46:37.878886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:10.742 [2024-11-20 17:46:37.878945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.742 [2024-11-20 17:46:37.878973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:10.742 [2024-11-20 17:46:37.878987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.742 [2024-11-20 17:46:37.881208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.742 [2024-11-20 17:46:37.881253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.742 BaseBdev1 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.742 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.003 BaseBdev2_malloc 00:12:11.003 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.003 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:11.003 17:46:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.003 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.003 true 00:12:11.003 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.003 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:11.003 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.003 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.003 [2024-11-20 17:46:37.970756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:11.003 [2024-11-20 17:46:37.970925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.003 [2024-11-20 17:46:37.970948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:11.003 [2024-11-20 17:46:37.970961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.003 [2024-11-20 17:46:37.973650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.003 [2024-11-20 17:46:37.973696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:11.003 BaseBdev2 00:12:11.003 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.004 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.004 17:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:11.004 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.004 17:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:11.004 BaseBdev3_malloc 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.004 true 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.004 [2024-11-20 17:46:38.060813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:11.004 [2024-11-20 17:46:38.060891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.004 [2024-11-20 17:46:38.060916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:11.004 [2024-11-20 17:46:38.060930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.004 [2024-11-20 17:46:38.063648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.004 [2024-11-20 17:46:38.063765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:11.004 BaseBdev3 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.004 BaseBdev4_malloc 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.004 true 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.004 [2024-11-20 17:46:38.139694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:11.004 [2024-11-20 17:46:38.139772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.004 [2024-11-20 17:46:38.139796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:11.004 [2024-11-20 17:46:38.139808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.004 [2024-11-20 17:46:38.142492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.004 [2024-11-20 17:46:38.142538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:11.004 BaseBdev4 
00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.004 [2024-11-20 17:46:38.151782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.004 [2024-11-20 17:46:38.154218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.004 [2024-11-20 17:46:38.154329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.004 [2024-11-20 17:46:38.154396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.004 [2024-11-20 17:46:38.154647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:11.004 [2024-11-20 17:46:38.154661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:11.004 [2024-11-20 17:46:38.154942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:11.004 [2024-11-20 17:46:38.155154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:11.004 [2024-11-20 17:46:38.155167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:11.004 [2024-11-20 17:46:38.155354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.004 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.264 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.264 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.264 "name": "raid_bdev1", 00:12:11.264 "uuid": "c21810ea-79ae-47a3-91b3-5b027aa2d5b5", 00:12:11.264 "strip_size_kb": 64, 00:12:11.264 "state": "online", 00:12:11.264 "raid_level": "concat", 00:12:11.264 "superblock": true, 00:12:11.264 "num_base_bdevs": 4, 00:12:11.264 "num_base_bdevs_discovered": 4, 00:12:11.264 
"num_base_bdevs_operational": 4, 00:12:11.264 "base_bdevs_list": [ 00:12:11.264 { 00:12:11.264 "name": "BaseBdev1", 00:12:11.264 "uuid": "fb545645-919f-54da-b6c7-21f8389ae58f", 00:12:11.264 "is_configured": true, 00:12:11.264 "data_offset": 2048, 00:12:11.264 "data_size": 63488 00:12:11.265 }, 00:12:11.265 { 00:12:11.265 "name": "BaseBdev2", 00:12:11.265 "uuid": "b19856a0-0101-5d92-bed5-666049b0d26f", 00:12:11.265 "is_configured": true, 00:12:11.265 "data_offset": 2048, 00:12:11.265 "data_size": 63488 00:12:11.265 }, 00:12:11.265 { 00:12:11.265 "name": "BaseBdev3", 00:12:11.265 "uuid": "b48a055a-ac68-53a1-ab4c-269f7e5376f1", 00:12:11.265 "is_configured": true, 00:12:11.265 "data_offset": 2048, 00:12:11.265 "data_size": 63488 00:12:11.265 }, 00:12:11.265 { 00:12:11.265 "name": "BaseBdev4", 00:12:11.265 "uuid": "6729663d-87b5-5fda-b575-4f63855469c7", 00:12:11.265 "is_configured": true, 00:12:11.265 "data_offset": 2048, 00:12:11.265 "data_size": 63488 00:12:11.265 } 00:12:11.265 ] 00:12:11.265 }' 00:12:11.265 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.265 17:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.524 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:11.524 17:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:11.524 [2024-11-20 17:46:38.664490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.461 17:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.461 17:46:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.721 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.722 "name": "raid_bdev1", 00:12:12.722 "uuid": "c21810ea-79ae-47a3-91b3-5b027aa2d5b5", 00:12:12.722 "strip_size_kb": 64, 00:12:12.722 "state": "online", 00:12:12.722 "raid_level": "concat", 00:12:12.722 "superblock": true, 00:12:12.722 "num_base_bdevs": 4, 00:12:12.722 "num_base_bdevs_discovered": 4, 00:12:12.722 "num_base_bdevs_operational": 4, 00:12:12.722 "base_bdevs_list": [ 00:12:12.722 { 00:12:12.722 "name": "BaseBdev1", 00:12:12.722 "uuid": "fb545645-919f-54da-b6c7-21f8389ae58f", 00:12:12.722 "is_configured": true, 00:12:12.722 "data_offset": 2048, 00:12:12.722 "data_size": 63488 00:12:12.722 }, 00:12:12.722 { 00:12:12.722 "name": "BaseBdev2", 00:12:12.722 "uuid": "b19856a0-0101-5d92-bed5-666049b0d26f", 00:12:12.722 "is_configured": true, 00:12:12.722 "data_offset": 2048, 00:12:12.722 "data_size": 63488 00:12:12.722 }, 00:12:12.722 { 00:12:12.722 "name": "BaseBdev3", 00:12:12.722 "uuid": "b48a055a-ac68-53a1-ab4c-269f7e5376f1", 00:12:12.722 "is_configured": true, 00:12:12.722 "data_offset": 2048, 00:12:12.722 "data_size": 63488 00:12:12.722 }, 00:12:12.722 { 00:12:12.722 "name": "BaseBdev4", 00:12:12.722 "uuid": "6729663d-87b5-5fda-b575-4f63855469c7", 00:12:12.722 "is_configured": true, 00:12:12.722 "data_offset": 2048, 00:12:12.722 "data_size": 63488 00:12:12.722 } 00:12:12.722 ] 00:12:12.722 }' 00:12:12.722 17:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.722 17:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.981 [2024-11-20 17:46:40.106897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.981 [2024-11-20 17:46:40.106956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.981 [2024-11-20 17:46:40.110073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.981 [2024-11-20 17:46:40.110149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.981 [2024-11-20 17:46:40.110207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.981 [2024-11-20 17:46:40.110223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:12.981 { 00:12:12.981 "results": [ 00:12:12.981 { 00:12:12.981 "job": "raid_bdev1", 00:12:12.981 "core_mask": "0x1", 00:12:12.981 "workload": "randrw", 00:12:12.981 "percentage": 50, 00:12:12.981 "status": "finished", 00:12:12.981 "queue_depth": 1, 00:12:12.981 "io_size": 131072, 00:12:12.981 "runtime": 1.442719, 00:12:12.981 "iops": 12723.198349782599, 00:12:12.981 "mibps": 1590.3997937228248, 00:12:12.981 "io_failed": 1, 00:12:12.981 "io_timeout": 0, 00:12:12.981 "avg_latency_us": 110.35894373432502, 00:12:12.981 "min_latency_us": 27.612227074235808, 00:12:12.981 "max_latency_us": 1473.844541484716 00:12:12.981 } 00:12:12.981 ], 00:12:12.981 "core_count": 1 00:12:12.981 } 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73474 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73474 ']' 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73474 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73474 00:12:12.981 killing process with pid 73474 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73474' 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73474 00:12:12.981 [2024-11-20 17:46:40.146703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.981 17:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73474 00:12:13.550 [2024-11-20 17:46:40.542191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oGUtrtEoFP 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:12:14.932 00:12:14.932 real 0m5.059s 00:12:14.932 user 0m5.932s 
00:12:14.932 sys 0m0.593s 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.932 ************************************ 00:12:14.932 END TEST raid_write_error_test 00:12:14.932 ************************************ 00:12:14.932 17:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.932 17:46:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:14.932 17:46:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:14.932 17:46:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:14.932 17:46:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.932 17:46:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.932 ************************************ 00:12:14.932 START TEST raid_state_function_test 00:12:14.932 ************************************ 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.932 
17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:14.932 Process raid pid: 73619 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 
00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73619 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73619' 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73619 00:12:14.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73619 ']' 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.932 17:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.932 [2024-11-20 17:46:42.070199] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:12:14.932 [2024-11-20 17:46:42.070413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.191 [2024-11-20 17:46:42.229297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.450 [2024-11-20 17:46:42.373404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.710 [2024-11-20 17:46:42.634492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.710 [2024-11-20 17:46:42.634679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.969 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.970 [2024-11-20 17:46:42.934719] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.970 [2024-11-20 17:46:42.934798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.970 [2024-11-20 17:46:42.934816] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.970 [2024-11-20 17:46:42.934827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.970 [2024-11-20 17:46:42.934834] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:15.970 [2024-11-20 17:46:42.934844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.970 [2024-11-20 17:46:42.934850] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:15.970 [2024-11-20 17:46:42.934859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.970 "name": "Existed_Raid", 00:12:15.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.970 "strip_size_kb": 0, 00:12:15.970 "state": "configuring", 00:12:15.970 "raid_level": "raid1", 00:12:15.970 "superblock": false, 00:12:15.970 "num_base_bdevs": 4, 00:12:15.970 "num_base_bdevs_discovered": 0, 00:12:15.970 "num_base_bdevs_operational": 4, 00:12:15.970 "base_bdevs_list": [ 00:12:15.970 { 00:12:15.970 "name": "BaseBdev1", 00:12:15.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.970 "is_configured": false, 00:12:15.970 "data_offset": 0, 00:12:15.970 "data_size": 0 00:12:15.970 }, 00:12:15.970 { 00:12:15.970 "name": "BaseBdev2", 00:12:15.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.970 "is_configured": false, 00:12:15.970 "data_offset": 0, 00:12:15.970 "data_size": 0 00:12:15.970 }, 00:12:15.970 { 00:12:15.970 "name": "BaseBdev3", 00:12:15.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.970 "is_configured": false, 00:12:15.970 "data_offset": 0, 00:12:15.970 "data_size": 0 00:12:15.970 }, 00:12:15.970 { 00:12:15.970 "name": "BaseBdev4", 00:12:15.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.970 "is_configured": false, 00:12:15.970 "data_offset": 0, 00:12:15.970 "data_size": 0 00:12:15.970 } 00:12:15.970 ] 00:12:15.970 }' 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.970 17:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.229 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:16.229 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.230 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.230 [2024-11-20 17:46:43.401998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.230 [2024-11-20 17:46:43.402160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 [2024-11-20 17:46:43.413941] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.490 [2024-11-20 17:46:43.414071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.490 [2024-11-20 17:46:43.414107] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.490 [2024-11-20 17:46:43.414132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.490 [2024-11-20 17:46:43.414152] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:16.490 [2024-11-20 17:46:43.414175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.490 [2024-11-20 17:46:43.414209] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:16.490 [2024-11-20 17:46:43.414233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 [2024-11-20 17:46:43.468178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.490 BaseBdev1 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 [ 00:12:16.490 { 00:12:16.490 "name": "BaseBdev1", 00:12:16.490 "aliases": [ 00:12:16.490 "c1128b25-cff5-4988-85c3-8d9b7fe09f24" 00:12:16.490 ], 00:12:16.490 "product_name": "Malloc disk", 00:12:16.490 "block_size": 512, 00:12:16.490 "num_blocks": 65536, 00:12:16.490 "uuid": "c1128b25-cff5-4988-85c3-8d9b7fe09f24", 00:12:16.490 "assigned_rate_limits": { 00:12:16.490 "rw_ios_per_sec": 0, 00:12:16.490 "rw_mbytes_per_sec": 0, 00:12:16.490 "r_mbytes_per_sec": 0, 00:12:16.490 "w_mbytes_per_sec": 0 00:12:16.490 }, 00:12:16.490 "claimed": true, 00:12:16.490 "claim_type": "exclusive_write", 00:12:16.490 "zoned": false, 00:12:16.490 "supported_io_types": { 00:12:16.490 "read": true, 00:12:16.490 "write": true, 00:12:16.490 "unmap": true, 00:12:16.490 "flush": true, 00:12:16.490 "reset": true, 00:12:16.490 "nvme_admin": false, 00:12:16.490 "nvme_io": false, 00:12:16.490 "nvme_io_md": false, 00:12:16.490 "write_zeroes": true, 00:12:16.490 "zcopy": true, 00:12:16.490 "get_zone_info": false, 00:12:16.490 "zone_management": false, 00:12:16.490 "zone_append": false, 00:12:16.490 "compare": false, 00:12:16.490 "compare_and_write": false, 00:12:16.490 "abort": true, 00:12:16.490 "seek_hole": false, 00:12:16.490 "seek_data": false, 00:12:16.490 "copy": true, 00:12:16.490 "nvme_iov_md": false 00:12:16.490 }, 00:12:16.490 "memory_domains": [ 00:12:16.490 { 00:12:16.490 "dma_device_id": "system", 00:12:16.490 "dma_device_type": 1 00:12:16.490 }, 00:12:16.490 { 00:12:16.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.490 "dma_device_type": 2 00:12:16.490 } 00:12:16.490 ], 00:12:16.490 "driver_specific": {} 00:12:16.490 } 00:12:16.490 ] 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.490 "name": "Existed_Raid", 00:12:16.490 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:16.490 "strip_size_kb": 0, 00:12:16.490 "state": "configuring", 00:12:16.490 "raid_level": "raid1", 00:12:16.490 "superblock": false, 00:12:16.490 "num_base_bdevs": 4, 00:12:16.490 "num_base_bdevs_discovered": 1, 00:12:16.490 "num_base_bdevs_operational": 4, 00:12:16.490 "base_bdevs_list": [ 00:12:16.490 { 00:12:16.490 "name": "BaseBdev1", 00:12:16.490 "uuid": "c1128b25-cff5-4988-85c3-8d9b7fe09f24", 00:12:16.490 "is_configured": true, 00:12:16.490 "data_offset": 0, 00:12:16.490 "data_size": 65536 00:12:16.490 }, 00:12:16.490 { 00:12:16.490 "name": "BaseBdev2", 00:12:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.490 "is_configured": false, 00:12:16.490 "data_offset": 0, 00:12:16.490 "data_size": 0 00:12:16.490 }, 00:12:16.490 { 00:12:16.490 "name": "BaseBdev3", 00:12:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.490 "is_configured": false, 00:12:16.490 "data_offset": 0, 00:12:16.490 "data_size": 0 00:12:16.490 }, 00:12:16.490 { 00:12:16.490 "name": "BaseBdev4", 00:12:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.490 "is_configured": false, 00:12:16.490 "data_offset": 0, 00:12:16.490 "data_size": 0 00:12:16.490 } 00:12:16.490 ] 00:12:16.490 }' 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.490 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.750 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.750 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.750 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.750 [2024-11-20 17:46:43.915531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.750 [2024-11-20 17:46:43.915617] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:16.750 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.750 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:16.750 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.750 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.009 [2024-11-20 17:46:43.927518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.009 [2024-11-20 17:46:43.929646] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.009 [2024-11-20 17:46:43.929694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.009 [2024-11-20 17:46:43.929706] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.009 [2024-11-20 17:46:43.929716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.009 [2024-11-20 17:46:43.929723] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:17.009 [2024-11-20 17:46:43.929731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.009 17:46:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.009 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.010 "name": "Existed_Raid", 00:12:17.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.010 "strip_size_kb": 0, 00:12:17.010 "state": "configuring", 00:12:17.010 "raid_level": "raid1", 00:12:17.010 "superblock": false, 00:12:17.010 "num_base_bdevs": 4, 00:12:17.010 "num_base_bdevs_discovered": 1, 00:12:17.010 
"num_base_bdevs_operational": 4, 00:12:17.010 "base_bdevs_list": [ 00:12:17.010 { 00:12:17.010 "name": "BaseBdev1", 00:12:17.010 "uuid": "c1128b25-cff5-4988-85c3-8d9b7fe09f24", 00:12:17.010 "is_configured": true, 00:12:17.010 "data_offset": 0, 00:12:17.010 "data_size": 65536 00:12:17.010 }, 00:12:17.010 { 00:12:17.010 "name": "BaseBdev2", 00:12:17.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.010 "is_configured": false, 00:12:17.010 "data_offset": 0, 00:12:17.010 "data_size": 0 00:12:17.010 }, 00:12:17.010 { 00:12:17.010 "name": "BaseBdev3", 00:12:17.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.010 "is_configured": false, 00:12:17.010 "data_offset": 0, 00:12:17.010 "data_size": 0 00:12:17.010 }, 00:12:17.010 { 00:12:17.010 "name": "BaseBdev4", 00:12:17.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.010 "is_configured": false, 00:12:17.010 "data_offset": 0, 00:12:17.010 "data_size": 0 00:12:17.010 } 00:12:17.010 ] 00:12:17.010 }' 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.010 17:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.270 [2024-11-20 17:46:44.439504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.270 BaseBdev2 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.270 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.530 [ 00:12:17.530 { 00:12:17.530 "name": "BaseBdev2", 00:12:17.530 "aliases": [ 00:12:17.530 "917ede4b-8818-441c-abfd-a90918be5661" 00:12:17.530 ], 00:12:17.530 "product_name": "Malloc disk", 00:12:17.530 "block_size": 512, 00:12:17.530 "num_blocks": 65536, 00:12:17.530 "uuid": "917ede4b-8818-441c-abfd-a90918be5661", 00:12:17.530 "assigned_rate_limits": { 00:12:17.530 "rw_ios_per_sec": 0, 00:12:17.530 "rw_mbytes_per_sec": 0, 00:12:17.530 "r_mbytes_per_sec": 0, 00:12:17.530 "w_mbytes_per_sec": 0 00:12:17.530 }, 00:12:17.530 "claimed": true, 00:12:17.530 "claim_type": "exclusive_write", 00:12:17.530 "zoned": false, 00:12:17.530 "supported_io_types": { 00:12:17.530 "read": true, 00:12:17.530 "write": true, 00:12:17.530 
"unmap": true, 00:12:17.530 "flush": true, 00:12:17.530 "reset": true, 00:12:17.530 "nvme_admin": false, 00:12:17.530 "nvme_io": false, 00:12:17.530 "nvme_io_md": false, 00:12:17.530 "write_zeroes": true, 00:12:17.530 "zcopy": true, 00:12:17.530 "get_zone_info": false, 00:12:17.530 "zone_management": false, 00:12:17.530 "zone_append": false, 00:12:17.530 "compare": false, 00:12:17.530 "compare_and_write": false, 00:12:17.530 "abort": true, 00:12:17.530 "seek_hole": false, 00:12:17.530 "seek_data": false, 00:12:17.530 "copy": true, 00:12:17.530 "nvme_iov_md": false 00:12:17.530 }, 00:12:17.530 "memory_domains": [ 00:12:17.530 { 00:12:17.530 "dma_device_id": "system", 00:12:17.530 "dma_device_type": 1 00:12:17.530 }, 00:12:17.530 { 00:12:17.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.530 "dma_device_type": 2 00:12:17.530 } 00:12:17.530 ], 00:12:17.530 "driver_specific": {} 00:12:17.530 } 00:12:17.530 ] 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.530 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.531 17:46:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.531 "name": "Existed_Raid", 00:12:17.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.531 "strip_size_kb": 0, 00:12:17.531 "state": "configuring", 00:12:17.531 "raid_level": "raid1", 00:12:17.531 "superblock": false, 00:12:17.531 "num_base_bdevs": 4, 00:12:17.531 "num_base_bdevs_discovered": 2, 00:12:17.531 "num_base_bdevs_operational": 4, 00:12:17.531 "base_bdevs_list": [ 00:12:17.531 { 00:12:17.531 "name": "BaseBdev1", 00:12:17.531 "uuid": "c1128b25-cff5-4988-85c3-8d9b7fe09f24", 00:12:17.531 "is_configured": true, 00:12:17.531 "data_offset": 0, 00:12:17.531 "data_size": 65536 00:12:17.531 }, 00:12:17.531 { 00:12:17.531 "name": "BaseBdev2", 00:12:17.531 "uuid": "917ede4b-8818-441c-abfd-a90918be5661", 00:12:17.531 "is_configured": true, 00:12:17.531 
"data_offset": 0, 00:12:17.531 "data_size": 65536 00:12:17.531 }, 00:12:17.531 { 00:12:17.531 "name": "BaseBdev3", 00:12:17.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.531 "is_configured": false, 00:12:17.531 "data_offset": 0, 00:12:17.531 "data_size": 0 00:12:17.531 }, 00:12:17.531 { 00:12:17.531 "name": "BaseBdev4", 00:12:17.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.531 "is_configured": false, 00:12:17.531 "data_offset": 0, 00:12:17.531 "data_size": 0 00:12:17.531 } 00:12:17.531 ] 00:12:17.531 }' 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.531 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.791 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.791 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.791 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.051 [2024-11-20 17:46:44.982881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.051 BaseBdev3 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.051 17:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.051 [ 00:12:18.051 { 00:12:18.051 "name": "BaseBdev3", 00:12:18.051 "aliases": [ 00:12:18.051 "30ba3b75-c494-4e2f-bbba-23cea15b71d4" 00:12:18.051 ], 00:12:18.051 "product_name": "Malloc disk", 00:12:18.051 "block_size": 512, 00:12:18.051 "num_blocks": 65536, 00:12:18.051 "uuid": "30ba3b75-c494-4e2f-bbba-23cea15b71d4", 00:12:18.051 "assigned_rate_limits": { 00:12:18.051 "rw_ios_per_sec": 0, 00:12:18.051 "rw_mbytes_per_sec": 0, 00:12:18.051 "r_mbytes_per_sec": 0, 00:12:18.051 "w_mbytes_per_sec": 0 00:12:18.051 }, 00:12:18.051 "claimed": true, 00:12:18.051 "claim_type": "exclusive_write", 00:12:18.051 "zoned": false, 00:12:18.051 "supported_io_types": { 00:12:18.051 "read": true, 00:12:18.051 "write": true, 00:12:18.051 "unmap": true, 00:12:18.051 "flush": true, 00:12:18.051 "reset": true, 00:12:18.051 "nvme_admin": false, 00:12:18.051 "nvme_io": false, 00:12:18.051 "nvme_io_md": false, 00:12:18.051 "write_zeroes": true, 00:12:18.051 "zcopy": true, 00:12:18.051 "get_zone_info": false, 00:12:18.051 "zone_management": false, 00:12:18.051 "zone_append": false, 00:12:18.051 "compare": false, 00:12:18.051 "compare_and_write": false, 00:12:18.051 "abort": true, 
00:12:18.051 "seek_hole": false, 00:12:18.051 "seek_data": false, 00:12:18.051 "copy": true, 00:12:18.051 "nvme_iov_md": false 00:12:18.051 }, 00:12:18.051 "memory_domains": [ 00:12:18.051 { 00:12:18.051 "dma_device_id": "system", 00:12:18.051 "dma_device_type": 1 00:12:18.051 }, 00:12:18.051 { 00:12:18.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.051 "dma_device_type": 2 00:12:18.051 } 00:12:18.051 ], 00:12:18.051 "driver_specific": {} 00:12:18.051 } 00:12:18.051 ] 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.051 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.052 17:46:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.052 "name": "Existed_Raid", 00:12:18.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.052 "strip_size_kb": 0, 00:12:18.052 "state": "configuring", 00:12:18.052 "raid_level": "raid1", 00:12:18.052 "superblock": false, 00:12:18.052 "num_base_bdevs": 4, 00:12:18.052 "num_base_bdevs_discovered": 3, 00:12:18.052 "num_base_bdevs_operational": 4, 00:12:18.052 "base_bdevs_list": [ 00:12:18.052 { 00:12:18.052 "name": "BaseBdev1", 00:12:18.052 "uuid": "c1128b25-cff5-4988-85c3-8d9b7fe09f24", 00:12:18.052 "is_configured": true, 00:12:18.052 "data_offset": 0, 00:12:18.052 "data_size": 65536 00:12:18.052 }, 00:12:18.052 { 00:12:18.052 "name": "BaseBdev2", 00:12:18.052 "uuid": "917ede4b-8818-441c-abfd-a90918be5661", 00:12:18.052 "is_configured": true, 00:12:18.052 "data_offset": 0, 00:12:18.052 "data_size": 65536 00:12:18.052 }, 00:12:18.052 { 00:12:18.052 "name": "BaseBdev3", 00:12:18.052 "uuid": "30ba3b75-c494-4e2f-bbba-23cea15b71d4", 00:12:18.052 "is_configured": true, 00:12:18.052 "data_offset": 0, 00:12:18.052 "data_size": 65536 00:12:18.052 }, 00:12:18.052 { 00:12:18.052 "name": "BaseBdev4", 00:12:18.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.052 "is_configured": false, 00:12:18.052 "data_offset": 
0, 00:12:18.052 "data_size": 0 00:12:18.052 } 00:12:18.052 ] 00:12:18.052 }' 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.052 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.311 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:18.311 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.311 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.572 [2024-11-20 17:46:45.486399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.572 [2024-11-20 17:46:45.486463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:18.572 [2024-11-20 17:46:45.486472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:18.572 [2024-11-20 17:46:45.486780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:18.572 [2024-11-20 17:46:45.486987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:18.572 [2024-11-20 17:46:45.487008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:18.572 [2024-11-20 17:46:45.487302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.572 BaseBdev4 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.572 [ 00:12:18.572 { 00:12:18.572 "name": "BaseBdev4", 00:12:18.572 "aliases": [ 00:12:18.572 "603e2cd8-344a-44c5-8625-bc7deeb07d49" 00:12:18.572 ], 00:12:18.572 "product_name": "Malloc disk", 00:12:18.572 "block_size": 512, 00:12:18.572 "num_blocks": 65536, 00:12:18.572 "uuid": "603e2cd8-344a-44c5-8625-bc7deeb07d49", 00:12:18.572 "assigned_rate_limits": { 00:12:18.572 "rw_ios_per_sec": 0, 00:12:18.572 "rw_mbytes_per_sec": 0, 00:12:18.572 "r_mbytes_per_sec": 0, 00:12:18.572 "w_mbytes_per_sec": 0 00:12:18.572 }, 00:12:18.572 "claimed": true, 00:12:18.572 "claim_type": "exclusive_write", 00:12:18.572 "zoned": false, 00:12:18.572 "supported_io_types": { 00:12:18.572 "read": true, 00:12:18.572 "write": true, 00:12:18.572 "unmap": true, 00:12:18.572 "flush": true, 00:12:18.572 "reset": true, 00:12:18.572 "nvme_admin": false, 00:12:18.572 "nvme_io": 
false, 00:12:18.572 "nvme_io_md": false, 00:12:18.572 "write_zeroes": true, 00:12:18.572 "zcopy": true, 00:12:18.572 "get_zone_info": false, 00:12:18.572 "zone_management": false, 00:12:18.572 "zone_append": false, 00:12:18.572 "compare": false, 00:12:18.572 "compare_and_write": false, 00:12:18.572 "abort": true, 00:12:18.572 "seek_hole": false, 00:12:18.572 "seek_data": false, 00:12:18.572 "copy": true, 00:12:18.572 "nvme_iov_md": false 00:12:18.572 }, 00:12:18.572 "memory_domains": [ 00:12:18.572 { 00:12:18.572 "dma_device_id": "system", 00:12:18.572 "dma_device_type": 1 00:12:18.572 }, 00:12:18.572 { 00:12:18.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.572 "dma_device_type": 2 00:12:18.572 } 00:12:18.572 ], 00:12:18.572 "driver_specific": {} 00:12:18.572 } 00:12:18.572 ] 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.572 "name": "Existed_Raid", 00:12:18.572 "uuid": "d445cbd2-e3e0-456b-ba3f-3c247aa8c1d2", 00:12:18.572 "strip_size_kb": 0, 00:12:18.572 "state": "online", 00:12:18.572 "raid_level": "raid1", 00:12:18.572 "superblock": false, 00:12:18.572 "num_base_bdevs": 4, 00:12:18.572 "num_base_bdevs_discovered": 4, 00:12:18.572 "num_base_bdevs_operational": 4, 00:12:18.572 "base_bdevs_list": [ 00:12:18.572 { 00:12:18.572 "name": "BaseBdev1", 00:12:18.572 "uuid": "c1128b25-cff5-4988-85c3-8d9b7fe09f24", 00:12:18.572 "is_configured": true, 00:12:18.572 "data_offset": 0, 00:12:18.572 "data_size": 65536 00:12:18.572 }, 00:12:18.572 { 00:12:18.572 "name": "BaseBdev2", 00:12:18.572 "uuid": "917ede4b-8818-441c-abfd-a90918be5661", 00:12:18.572 "is_configured": true, 00:12:18.572 "data_offset": 0, 00:12:18.572 "data_size": 65536 00:12:18.572 }, 00:12:18.572 { 00:12:18.572 "name": "BaseBdev3", 00:12:18.572 "uuid": "30ba3b75-c494-4e2f-bbba-23cea15b71d4", 
00:12:18.572 "is_configured": true, 00:12:18.572 "data_offset": 0, 00:12:18.572 "data_size": 65536 00:12:18.572 }, 00:12:18.572 { 00:12:18.572 "name": "BaseBdev4", 00:12:18.572 "uuid": "603e2cd8-344a-44c5-8625-bc7deeb07d49", 00:12:18.572 "is_configured": true, 00:12:18.572 "data_offset": 0, 00:12:18.572 "data_size": 65536 00:12:18.572 } 00:12:18.572 ] 00:12:18.572 }' 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.572 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.832 17:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.832 [2024-11-20 17:46:45.978103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.832 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.092 "name": "Existed_Raid", 00:12:19.092 "aliases": [ 00:12:19.092 "d445cbd2-e3e0-456b-ba3f-3c247aa8c1d2" 00:12:19.092 ], 00:12:19.092 "product_name": "Raid Volume", 00:12:19.092 "block_size": 512, 00:12:19.092 "num_blocks": 65536, 00:12:19.092 "uuid": "d445cbd2-e3e0-456b-ba3f-3c247aa8c1d2", 00:12:19.092 "assigned_rate_limits": { 00:12:19.092 "rw_ios_per_sec": 0, 00:12:19.092 "rw_mbytes_per_sec": 0, 00:12:19.092 "r_mbytes_per_sec": 0, 00:12:19.092 "w_mbytes_per_sec": 0 00:12:19.092 }, 00:12:19.092 "claimed": false, 00:12:19.092 "zoned": false, 00:12:19.092 "supported_io_types": { 00:12:19.092 "read": true, 00:12:19.092 "write": true, 00:12:19.092 "unmap": false, 00:12:19.092 "flush": false, 00:12:19.092 "reset": true, 00:12:19.092 "nvme_admin": false, 00:12:19.092 "nvme_io": false, 00:12:19.092 "nvme_io_md": false, 00:12:19.092 "write_zeroes": true, 00:12:19.092 "zcopy": false, 00:12:19.092 "get_zone_info": false, 00:12:19.092 "zone_management": false, 00:12:19.092 "zone_append": false, 00:12:19.092 "compare": false, 00:12:19.092 "compare_and_write": false, 00:12:19.092 "abort": false, 00:12:19.092 "seek_hole": false, 00:12:19.092 "seek_data": false, 00:12:19.092 "copy": false, 00:12:19.092 "nvme_iov_md": false 00:12:19.092 }, 00:12:19.092 "memory_domains": [ 00:12:19.092 { 00:12:19.092 "dma_device_id": "system", 00:12:19.092 "dma_device_type": 1 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.092 "dma_device_type": 2 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "dma_device_id": "system", 00:12:19.092 "dma_device_type": 1 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.092 "dma_device_type": 2 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "dma_device_id": "system", 00:12:19.092 "dma_device_type": 1 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.092 "dma_device_type": 2 
00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "dma_device_id": "system", 00:12:19.092 "dma_device_type": 1 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.092 "dma_device_type": 2 00:12:19.092 } 00:12:19.092 ], 00:12:19.092 "driver_specific": { 00:12:19.092 "raid": { 00:12:19.092 "uuid": "d445cbd2-e3e0-456b-ba3f-3c247aa8c1d2", 00:12:19.092 "strip_size_kb": 0, 00:12:19.092 "state": "online", 00:12:19.092 "raid_level": "raid1", 00:12:19.092 "superblock": false, 00:12:19.092 "num_base_bdevs": 4, 00:12:19.092 "num_base_bdevs_discovered": 4, 00:12:19.092 "num_base_bdevs_operational": 4, 00:12:19.092 "base_bdevs_list": [ 00:12:19.092 { 00:12:19.092 "name": "BaseBdev1", 00:12:19.092 "uuid": "c1128b25-cff5-4988-85c3-8d9b7fe09f24", 00:12:19.092 "is_configured": true, 00:12:19.092 "data_offset": 0, 00:12:19.092 "data_size": 65536 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "name": "BaseBdev2", 00:12:19.092 "uuid": "917ede4b-8818-441c-abfd-a90918be5661", 00:12:19.092 "is_configured": true, 00:12:19.092 "data_offset": 0, 00:12:19.092 "data_size": 65536 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "name": "BaseBdev3", 00:12:19.092 "uuid": "30ba3b75-c494-4e2f-bbba-23cea15b71d4", 00:12:19.092 "is_configured": true, 00:12:19.092 "data_offset": 0, 00:12:19.092 "data_size": 65536 00:12:19.092 }, 00:12:19.092 { 00:12:19.092 "name": "BaseBdev4", 00:12:19.092 "uuid": "603e2cd8-344a-44c5-8625-bc7deeb07d49", 00:12:19.092 "is_configured": true, 00:12:19.092 "data_offset": 0, 00:12:19.092 "data_size": 65536 00:12:19.092 } 00:12:19.092 ] 00:12:19.092 } 00:12:19.092 } 00:12:19.092 }' 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:19.092 BaseBdev2 00:12:19.092 BaseBdev3 00:12:19.092 BaseBdev4' 00:12:19.092 
17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.092 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.093 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.352 [2024-11-20 17:46:46.285231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.352 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.353 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.353 "name": "Existed_Raid", 00:12:19.353 "uuid": "d445cbd2-e3e0-456b-ba3f-3c247aa8c1d2", 00:12:19.353 "strip_size_kb": 0, 00:12:19.353 "state": "online", 00:12:19.353 "raid_level": "raid1", 00:12:19.353 "superblock": false, 00:12:19.353 "num_base_bdevs": 4, 00:12:19.353 "num_base_bdevs_discovered": 3, 00:12:19.353 "num_base_bdevs_operational": 3, 00:12:19.353 "base_bdevs_list": [ 00:12:19.353 { 00:12:19.353 "name": null, 00:12:19.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.353 "is_configured": false, 00:12:19.353 "data_offset": 0, 00:12:19.353 "data_size": 65536 00:12:19.353 }, 00:12:19.353 { 00:12:19.353 "name": "BaseBdev2", 00:12:19.353 "uuid": "917ede4b-8818-441c-abfd-a90918be5661", 00:12:19.353 "is_configured": true, 00:12:19.353 "data_offset": 0, 00:12:19.353 "data_size": 65536 00:12:19.353 }, 00:12:19.353 { 00:12:19.353 "name": "BaseBdev3", 00:12:19.353 "uuid": "30ba3b75-c494-4e2f-bbba-23cea15b71d4", 00:12:19.353 "is_configured": true, 00:12:19.353 "data_offset": 0, 00:12:19.353 "data_size": 65536 00:12:19.353 }, 00:12:19.353 { 
00:12:19.353 "name": "BaseBdev4", 00:12:19.353 "uuid": "603e2cd8-344a-44c5-8625-bc7deeb07d49", 00:12:19.353 "is_configured": true, 00:12:19.353 "data_offset": 0, 00:12:19.353 "data_size": 65536 00:12:19.353 } 00:12:19.353 ] 00:12:19.353 }' 00:12:19.353 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.353 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.921 17:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.921 [2024-11-20 17:46:46.929565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.921 
17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.921 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.921 [2024-11-20 17:46:47.089535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.181 17:46:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.181 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.181 [2024-11-20 17:46:47.250361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:20.181 [2024-11-20 17:46:47.250482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.441 [2024-11-20 17:46:47.356941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.441 [2024-11-20 17:46:47.357007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.441 [2024-11-20 17:46:47.357041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.441 17:46:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.441 BaseBdev2 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.441 17:46:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.441 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.441 [ 00:12:20.441 { 00:12:20.441 "name": "BaseBdev2", 00:12:20.441 "aliases": [ 00:12:20.441 "1b9e2b03-0b40-4e57-b087-e8f08c627a77" 00:12:20.441 ], 00:12:20.441 "product_name": "Malloc disk", 00:12:20.441 "block_size": 512, 00:12:20.441 "num_blocks": 65536, 00:12:20.441 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:20.441 "assigned_rate_limits": { 00:12:20.441 "rw_ios_per_sec": 0, 00:12:20.441 "rw_mbytes_per_sec": 0, 00:12:20.441 "r_mbytes_per_sec": 0, 00:12:20.441 "w_mbytes_per_sec": 0 00:12:20.441 }, 00:12:20.441 "claimed": false, 00:12:20.441 "zoned": false, 00:12:20.441 "supported_io_types": { 00:12:20.441 "read": true, 00:12:20.441 "write": true, 00:12:20.441 "unmap": true, 00:12:20.441 "flush": true, 00:12:20.441 "reset": true, 00:12:20.441 "nvme_admin": false, 00:12:20.441 "nvme_io": false, 00:12:20.441 "nvme_io_md": false, 00:12:20.441 "write_zeroes": true, 00:12:20.441 "zcopy": true, 00:12:20.441 "get_zone_info": false, 00:12:20.441 "zone_management": false, 00:12:20.441 "zone_append": false, 00:12:20.441 "compare": false, 00:12:20.441 "compare_and_write": false, 
00:12:20.441 "abort": true, 00:12:20.441 "seek_hole": false, 00:12:20.441 "seek_data": false, 00:12:20.441 "copy": true, 00:12:20.441 "nvme_iov_md": false 00:12:20.441 }, 00:12:20.441 "memory_domains": [ 00:12:20.441 { 00:12:20.441 "dma_device_id": "system", 00:12:20.441 "dma_device_type": 1 00:12:20.441 }, 00:12:20.441 { 00:12:20.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.441 "dma_device_type": 2 00:12:20.441 } 00:12:20.441 ], 00:12:20.441 "driver_specific": {} 00:12:20.441 } 00:12:20.441 ] 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.442 BaseBdev3 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.442 17:46:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.442 [ 00:12:20.442 { 00:12:20.442 "name": "BaseBdev3", 00:12:20.442 "aliases": [ 00:12:20.442 "f579401c-0a6d-49b1-99b7-30c85da4e1ef" 00:12:20.442 ], 00:12:20.442 "product_name": "Malloc disk", 00:12:20.442 "block_size": 512, 00:12:20.442 "num_blocks": 65536, 00:12:20.442 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:20.442 "assigned_rate_limits": { 00:12:20.442 "rw_ios_per_sec": 0, 00:12:20.442 "rw_mbytes_per_sec": 0, 00:12:20.442 "r_mbytes_per_sec": 0, 00:12:20.442 "w_mbytes_per_sec": 0 00:12:20.442 }, 00:12:20.442 "claimed": false, 00:12:20.442 "zoned": false, 00:12:20.442 "supported_io_types": { 00:12:20.442 "read": true, 00:12:20.442 "write": true, 00:12:20.442 "unmap": true, 00:12:20.442 "flush": true, 00:12:20.442 "reset": true, 00:12:20.442 "nvme_admin": false, 00:12:20.442 "nvme_io": false, 00:12:20.442 "nvme_io_md": false, 00:12:20.442 "write_zeroes": true, 00:12:20.442 "zcopy": true, 00:12:20.442 "get_zone_info": false, 00:12:20.442 "zone_management": false, 00:12:20.442 "zone_append": false, 00:12:20.442 "compare": false, 00:12:20.442 "compare_and_write": false, 
00:12:20.442 "abort": true, 00:12:20.442 "seek_hole": false, 00:12:20.442 "seek_data": false, 00:12:20.442 "copy": true, 00:12:20.442 "nvme_iov_md": false 00:12:20.442 }, 00:12:20.442 "memory_domains": [ 00:12:20.442 { 00:12:20.442 "dma_device_id": "system", 00:12:20.442 "dma_device_type": 1 00:12:20.442 }, 00:12:20.442 { 00:12:20.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.442 "dma_device_type": 2 00:12:20.442 } 00:12:20.442 ], 00:12:20.442 "driver_specific": {} 00:12:20.442 } 00:12:20.442 ] 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.442 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.701 BaseBdev4 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.701 17:46:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.701 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.701 [ 00:12:20.701 { 00:12:20.701 "name": "BaseBdev4", 00:12:20.701 "aliases": [ 00:12:20.701 "4f7a22d1-8a85-483f-bb11-7c8932351260" 00:12:20.701 ], 00:12:20.701 "product_name": "Malloc disk", 00:12:20.701 "block_size": 512, 00:12:20.701 "num_blocks": 65536, 00:12:20.701 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:20.701 "assigned_rate_limits": { 00:12:20.701 "rw_ios_per_sec": 0, 00:12:20.701 "rw_mbytes_per_sec": 0, 00:12:20.701 "r_mbytes_per_sec": 0, 00:12:20.701 "w_mbytes_per_sec": 0 00:12:20.701 }, 00:12:20.701 "claimed": false, 00:12:20.701 "zoned": false, 00:12:20.701 "supported_io_types": { 00:12:20.701 "read": true, 00:12:20.701 "write": true, 00:12:20.701 "unmap": true, 00:12:20.701 "flush": true, 00:12:20.702 "reset": true, 00:12:20.702 "nvme_admin": false, 00:12:20.702 "nvme_io": false, 00:12:20.702 "nvme_io_md": false, 00:12:20.702 "write_zeroes": true, 00:12:20.702 "zcopy": true, 00:12:20.702 "get_zone_info": false, 00:12:20.702 "zone_management": false, 00:12:20.702 "zone_append": false, 00:12:20.702 "compare": false, 00:12:20.702 "compare_and_write": false, 
00:12:20.702 "abort": true, 00:12:20.702 "seek_hole": false, 00:12:20.702 "seek_data": false, 00:12:20.702 "copy": true, 00:12:20.702 "nvme_iov_md": false 00:12:20.702 }, 00:12:20.702 "memory_domains": [ 00:12:20.702 { 00:12:20.702 "dma_device_id": "system", 00:12:20.702 "dma_device_type": 1 00:12:20.702 }, 00:12:20.702 { 00:12:20.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.702 "dma_device_type": 2 00:12:20.702 } 00:12:20.702 ], 00:12:20.702 "driver_specific": {} 00:12:20.702 } 00:12:20.702 ] 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.702 [2024-11-20 17:46:47.673278] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.702 [2024-11-20 17:46:47.673334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.702 [2024-11-20 17:46:47.673356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.702 [2024-11-20 17:46:47.675455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.702 [2024-11-20 17:46:47.675505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.702 17:46:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.702 "name": "Existed_Raid", 00:12:20.702 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:20.702 "strip_size_kb": 0, 00:12:20.702 "state": "configuring", 00:12:20.702 "raid_level": "raid1", 00:12:20.702 "superblock": false, 00:12:20.702 "num_base_bdevs": 4, 00:12:20.702 "num_base_bdevs_discovered": 3, 00:12:20.702 "num_base_bdevs_operational": 4, 00:12:20.702 "base_bdevs_list": [ 00:12:20.702 { 00:12:20.702 "name": "BaseBdev1", 00:12:20.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.702 "is_configured": false, 00:12:20.702 "data_offset": 0, 00:12:20.702 "data_size": 0 00:12:20.702 }, 00:12:20.702 { 00:12:20.702 "name": "BaseBdev2", 00:12:20.702 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:20.702 "is_configured": true, 00:12:20.702 "data_offset": 0, 00:12:20.702 "data_size": 65536 00:12:20.702 }, 00:12:20.702 { 00:12:20.702 "name": "BaseBdev3", 00:12:20.702 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:20.702 "is_configured": true, 00:12:20.702 "data_offset": 0, 00:12:20.702 "data_size": 65536 00:12:20.702 }, 00:12:20.702 { 00:12:20.702 "name": "BaseBdev4", 00:12:20.702 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:20.702 "is_configured": true, 00:12:20.702 "data_offset": 0, 00:12:20.702 "data_size": 65536 00:12:20.702 } 00:12:20.702 ] 00:12:20.702 }' 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.702 17:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.270 [2024-11-20 17:46:48.156652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.270 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.270 "name": "Existed_Raid", 00:12:21.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.270 
"strip_size_kb": 0, 00:12:21.270 "state": "configuring", 00:12:21.271 "raid_level": "raid1", 00:12:21.271 "superblock": false, 00:12:21.271 "num_base_bdevs": 4, 00:12:21.271 "num_base_bdevs_discovered": 2, 00:12:21.271 "num_base_bdevs_operational": 4, 00:12:21.271 "base_bdevs_list": [ 00:12:21.271 { 00:12:21.271 "name": "BaseBdev1", 00:12:21.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.271 "is_configured": false, 00:12:21.271 "data_offset": 0, 00:12:21.271 "data_size": 0 00:12:21.271 }, 00:12:21.271 { 00:12:21.271 "name": null, 00:12:21.271 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:21.271 "is_configured": false, 00:12:21.271 "data_offset": 0, 00:12:21.271 "data_size": 65536 00:12:21.271 }, 00:12:21.271 { 00:12:21.271 "name": "BaseBdev3", 00:12:21.271 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:21.271 "is_configured": true, 00:12:21.271 "data_offset": 0, 00:12:21.271 "data_size": 65536 00:12:21.271 }, 00:12:21.271 { 00:12:21.271 "name": "BaseBdev4", 00:12:21.271 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:21.271 "is_configured": true, 00:12:21.271 "data_offset": 0, 00:12:21.271 "data_size": 65536 00:12:21.271 } 00:12:21.271 ] 00:12:21.271 }' 00:12:21.271 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.271 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.530 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.530 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.530 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.530 17:46:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:21.530 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:21.530 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.531 [2024-11-20 17:46:48.668118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.531 BaseBdev1 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.531 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.531 [ 00:12:21.531 { 00:12:21.531 "name": "BaseBdev1", 00:12:21.531 "aliases": [ 00:12:21.531 "9322c425-7065-43a5-a477-ac4a982cd5f2" 00:12:21.531 ], 00:12:21.531 "product_name": "Malloc disk", 00:12:21.531 "block_size": 512, 00:12:21.531 "num_blocks": 65536, 00:12:21.531 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:21.531 "assigned_rate_limits": { 00:12:21.531 "rw_ios_per_sec": 0, 00:12:21.531 "rw_mbytes_per_sec": 0, 00:12:21.531 "r_mbytes_per_sec": 0, 00:12:21.531 "w_mbytes_per_sec": 0 00:12:21.531 }, 00:12:21.531 "claimed": true, 00:12:21.531 "claim_type": "exclusive_write", 00:12:21.531 "zoned": false, 00:12:21.531 "supported_io_types": { 00:12:21.531 "read": true, 00:12:21.531 "write": true, 00:12:21.531 "unmap": true, 00:12:21.531 "flush": true, 00:12:21.531 "reset": true, 00:12:21.531 "nvme_admin": false, 00:12:21.531 "nvme_io": false, 00:12:21.531 "nvme_io_md": false, 00:12:21.531 "write_zeroes": true, 00:12:21.531 "zcopy": true, 00:12:21.531 "get_zone_info": false, 00:12:21.531 "zone_management": false, 00:12:21.531 "zone_append": false, 00:12:21.531 "compare": false, 00:12:21.531 "compare_and_write": false, 00:12:21.531 "abort": true, 00:12:21.531 "seek_hole": false, 00:12:21.531 "seek_data": false, 00:12:21.531 "copy": true, 00:12:21.531 "nvme_iov_md": false 00:12:21.531 }, 00:12:21.531 "memory_domains": [ 00:12:21.791 { 00:12:21.791 "dma_device_id": "system", 00:12:21.791 "dma_device_type": 1 00:12:21.791 }, 00:12:21.791 { 00:12:21.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.791 "dma_device_type": 2 00:12:21.791 } 00:12:21.791 ], 00:12:21.791 "driver_specific": {} 00:12:21.791 } 00:12:21.791 ] 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.791 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.791 "name": "Existed_Raid", 00:12:21.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.791 
"strip_size_kb": 0, 00:12:21.792 "state": "configuring", 00:12:21.792 "raid_level": "raid1", 00:12:21.792 "superblock": false, 00:12:21.792 "num_base_bdevs": 4, 00:12:21.792 "num_base_bdevs_discovered": 3, 00:12:21.792 "num_base_bdevs_operational": 4, 00:12:21.792 "base_bdevs_list": [ 00:12:21.792 { 00:12:21.792 "name": "BaseBdev1", 00:12:21.792 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:21.792 "is_configured": true, 00:12:21.792 "data_offset": 0, 00:12:21.792 "data_size": 65536 00:12:21.792 }, 00:12:21.792 { 00:12:21.792 "name": null, 00:12:21.792 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:21.792 "is_configured": false, 00:12:21.792 "data_offset": 0, 00:12:21.792 "data_size": 65536 00:12:21.792 }, 00:12:21.792 { 00:12:21.792 "name": "BaseBdev3", 00:12:21.792 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:21.792 "is_configured": true, 00:12:21.792 "data_offset": 0, 00:12:21.792 "data_size": 65536 00:12:21.792 }, 00:12:21.792 { 00:12:21.792 "name": "BaseBdev4", 00:12:21.792 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:21.792 "is_configured": true, 00:12:21.792 "data_offset": 0, 00:12:21.792 "data_size": 65536 00:12:21.792 } 00:12:21.792 ] 00:12:21.792 }' 00:12:21.792 17:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.792 17:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.051 
17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.051 [2024-11-20 17:46:49.167406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.051 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.051 "name": "Existed_Raid", 00:12:22.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.051 "strip_size_kb": 0, 00:12:22.051 "state": "configuring", 00:12:22.051 "raid_level": "raid1", 00:12:22.051 "superblock": false, 00:12:22.051 "num_base_bdevs": 4, 00:12:22.051 "num_base_bdevs_discovered": 2, 00:12:22.051 "num_base_bdevs_operational": 4, 00:12:22.051 "base_bdevs_list": [ 00:12:22.052 { 00:12:22.052 "name": "BaseBdev1", 00:12:22.052 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:22.052 "is_configured": true, 00:12:22.052 "data_offset": 0, 00:12:22.052 "data_size": 65536 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "name": null, 00:12:22.052 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:22.052 "is_configured": false, 00:12:22.052 "data_offset": 0, 00:12:22.052 "data_size": 65536 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "name": null, 00:12:22.052 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:22.052 "is_configured": false, 00:12:22.052 "data_offset": 0, 00:12:22.052 "data_size": 65536 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "name": "BaseBdev4", 00:12:22.052 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:22.052 "is_configured": true, 00:12:22.052 "data_offset": 0, 00:12:22.052 "data_size": 65536 00:12:22.052 } 00:12:22.052 ] 00:12:22.052 }' 00:12:22.052 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.052 17:46:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.619 [2024-11-20 17:46:49.670509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.619 "name": "Existed_Raid", 00:12:22.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.619 "strip_size_kb": 0, 00:12:22.619 "state": "configuring", 00:12:22.619 "raid_level": "raid1", 00:12:22.619 "superblock": false, 00:12:22.619 "num_base_bdevs": 4, 00:12:22.619 "num_base_bdevs_discovered": 3, 00:12:22.619 "num_base_bdevs_operational": 4, 00:12:22.619 "base_bdevs_list": [ 00:12:22.619 { 00:12:22.619 "name": "BaseBdev1", 00:12:22.619 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:22.619 "is_configured": true, 00:12:22.619 "data_offset": 0, 00:12:22.619 "data_size": 65536 00:12:22.619 }, 00:12:22.619 { 00:12:22.619 "name": null, 00:12:22.619 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:22.619 "is_configured": false, 00:12:22.619 "data_offset": 0, 00:12:22.619 "data_size": 65536 00:12:22.619 }, 00:12:22.619 { 
00:12:22.619 "name": "BaseBdev3", 00:12:22.619 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:22.619 "is_configured": true, 00:12:22.619 "data_offset": 0, 00:12:22.619 "data_size": 65536 00:12:22.619 }, 00:12:22.619 { 00:12:22.619 "name": "BaseBdev4", 00:12:22.619 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:22.619 "is_configured": true, 00:12:22.619 "data_offset": 0, 00:12:22.619 "data_size": 65536 00:12:22.619 } 00:12:22.619 ] 00:12:22.619 }' 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.619 17:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.186 [2024-11-20 17:46:50.161737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.186 "name": "Existed_Raid", 00:12:23.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.186 "strip_size_kb": 0, 00:12:23.186 "state": "configuring", 00:12:23.186 "raid_level": "raid1", 00:12:23.186 "superblock": false, 00:12:23.186 
"num_base_bdevs": 4, 00:12:23.186 "num_base_bdevs_discovered": 2, 00:12:23.186 "num_base_bdevs_operational": 4, 00:12:23.186 "base_bdevs_list": [ 00:12:23.186 { 00:12:23.186 "name": null, 00:12:23.186 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:23.186 "is_configured": false, 00:12:23.186 "data_offset": 0, 00:12:23.186 "data_size": 65536 00:12:23.186 }, 00:12:23.186 { 00:12:23.186 "name": null, 00:12:23.186 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:23.186 "is_configured": false, 00:12:23.186 "data_offset": 0, 00:12:23.186 "data_size": 65536 00:12:23.186 }, 00:12:23.186 { 00:12:23.186 "name": "BaseBdev3", 00:12:23.186 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:23.186 "is_configured": true, 00:12:23.186 "data_offset": 0, 00:12:23.186 "data_size": 65536 00:12:23.186 }, 00:12:23.186 { 00:12:23.186 "name": "BaseBdev4", 00:12:23.186 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:23.186 "is_configured": true, 00:12:23.186 "data_offset": 0, 00:12:23.186 "data_size": 65536 00:12:23.186 } 00:12:23.186 ] 00:12:23.186 }' 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.186 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:23.754 17:46:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.754 [2024-11-20 17:46:50.757007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.754 17:46:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.754 "name": "Existed_Raid", 00:12:23.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.754 "strip_size_kb": 0, 00:12:23.754 "state": "configuring", 00:12:23.754 "raid_level": "raid1", 00:12:23.754 "superblock": false, 00:12:23.754 "num_base_bdevs": 4, 00:12:23.754 "num_base_bdevs_discovered": 3, 00:12:23.754 "num_base_bdevs_operational": 4, 00:12:23.754 "base_bdevs_list": [ 00:12:23.754 { 00:12:23.754 "name": null, 00:12:23.754 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:23.754 "is_configured": false, 00:12:23.754 "data_offset": 0, 00:12:23.754 "data_size": 65536 00:12:23.754 }, 00:12:23.754 { 00:12:23.754 "name": "BaseBdev2", 00:12:23.754 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:23.754 "is_configured": true, 00:12:23.754 "data_offset": 0, 00:12:23.754 "data_size": 65536 00:12:23.754 }, 00:12:23.754 { 00:12:23.754 "name": "BaseBdev3", 00:12:23.754 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:23.754 "is_configured": true, 00:12:23.754 "data_offset": 0, 00:12:23.754 "data_size": 65536 00:12:23.754 }, 00:12:23.754 { 00:12:23.754 "name": "BaseBdev4", 00:12:23.754 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:23.754 "is_configured": true, 00:12:23.754 "data_offset": 0, 00:12:23.754 "data_size": 65536 00:12:23.754 } 00:12:23.754 ] 00:12:23.754 }' 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.754 17:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9322c425-7065-43a5-a477-ac4a982cd5f2 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.322 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.322 [2024-11-20 17:46:51.329055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:24.322 [2024-11-20 17:46:51.329116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:24.322 [2024-11-20 17:46:51.329126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:24.322 [2024-11-20 17:46:51.329420] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:24.322 [2024-11-20 17:46:51.329605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:24.322 [2024-11-20 17:46:51.329622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:24.323 [2024-11-20 17:46:51.329899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.323 NewBaseBdev 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.323 17:46:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.323 [ 00:12:24.323 { 00:12:24.323 "name": "NewBaseBdev", 00:12:24.323 "aliases": [ 00:12:24.323 "9322c425-7065-43a5-a477-ac4a982cd5f2" 00:12:24.323 ], 00:12:24.323 "product_name": "Malloc disk", 00:12:24.323 "block_size": 512, 00:12:24.323 "num_blocks": 65536, 00:12:24.323 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:24.323 "assigned_rate_limits": { 00:12:24.323 "rw_ios_per_sec": 0, 00:12:24.323 "rw_mbytes_per_sec": 0, 00:12:24.323 "r_mbytes_per_sec": 0, 00:12:24.323 "w_mbytes_per_sec": 0 00:12:24.323 }, 00:12:24.323 "claimed": true, 00:12:24.323 "claim_type": "exclusive_write", 00:12:24.323 "zoned": false, 00:12:24.323 "supported_io_types": { 00:12:24.323 "read": true, 00:12:24.323 "write": true, 00:12:24.323 "unmap": true, 00:12:24.323 "flush": true, 00:12:24.323 "reset": true, 00:12:24.323 "nvme_admin": false, 00:12:24.323 "nvme_io": false, 00:12:24.323 "nvme_io_md": false, 00:12:24.323 "write_zeroes": true, 00:12:24.323 "zcopy": true, 00:12:24.323 "get_zone_info": false, 00:12:24.323 "zone_management": false, 00:12:24.323 "zone_append": false, 00:12:24.323 "compare": false, 00:12:24.323 "compare_and_write": false, 00:12:24.323 "abort": true, 00:12:24.323 "seek_hole": false, 00:12:24.323 "seek_data": false, 00:12:24.323 "copy": true, 00:12:24.323 "nvme_iov_md": false 00:12:24.323 }, 00:12:24.323 "memory_domains": [ 00:12:24.323 { 00:12:24.323 "dma_device_id": "system", 00:12:24.323 "dma_device_type": 1 00:12:24.323 }, 00:12:24.323 { 00:12:24.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.323 "dma_device_type": 2 00:12:24.323 } 00:12:24.323 ], 00:12:24.323 "driver_specific": {} 00:12:24.323 } 00:12:24.323 ] 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.323 17:46:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.323 "name": "Existed_Raid", 00:12:24.323 "uuid": "c248b2da-2fa1-4e6a-ba14-ed26ed173a42", 00:12:24.323 "strip_size_kb": 0, 00:12:24.323 "state": "online", 00:12:24.323 "raid_level": "raid1", 
00:12:24.323 "superblock": false, 00:12:24.323 "num_base_bdevs": 4, 00:12:24.323 "num_base_bdevs_discovered": 4, 00:12:24.323 "num_base_bdevs_operational": 4, 00:12:24.323 "base_bdevs_list": [ 00:12:24.323 { 00:12:24.323 "name": "NewBaseBdev", 00:12:24.323 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:24.323 "is_configured": true, 00:12:24.323 "data_offset": 0, 00:12:24.323 "data_size": 65536 00:12:24.323 }, 00:12:24.323 { 00:12:24.323 "name": "BaseBdev2", 00:12:24.323 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:24.323 "is_configured": true, 00:12:24.323 "data_offset": 0, 00:12:24.323 "data_size": 65536 00:12:24.323 }, 00:12:24.323 { 00:12:24.323 "name": "BaseBdev3", 00:12:24.323 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:24.323 "is_configured": true, 00:12:24.323 "data_offset": 0, 00:12:24.323 "data_size": 65536 00:12:24.323 }, 00:12:24.323 { 00:12:24.323 "name": "BaseBdev4", 00:12:24.323 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:24.323 "is_configured": true, 00:12:24.323 "data_offset": 0, 00:12:24.323 "data_size": 65536 00:12:24.323 } 00:12:24.323 ] 00:12:24.323 }' 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.323 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.892 [2024-11-20 17:46:51.796808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.892 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:24.892 "name": "Existed_Raid", 00:12:24.892 "aliases": [ 00:12:24.892 "c248b2da-2fa1-4e6a-ba14-ed26ed173a42" 00:12:24.892 ], 00:12:24.892 "product_name": "Raid Volume", 00:12:24.892 "block_size": 512, 00:12:24.892 "num_blocks": 65536, 00:12:24.892 "uuid": "c248b2da-2fa1-4e6a-ba14-ed26ed173a42", 00:12:24.892 "assigned_rate_limits": { 00:12:24.892 "rw_ios_per_sec": 0, 00:12:24.892 "rw_mbytes_per_sec": 0, 00:12:24.892 "r_mbytes_per_sec": 0, 00:12:24.892 "w_mbytes_per_sec": 0 00:12:24.892 }, 00:12:24.892 "claimed": false, 00:12:24.892 "zoned": false, 00:12:24.892 "supported_io_types": { 00:12:24.892 "read": true, 00:12:24.892 "write": true, 00:12:24.892 "unmap": false, 00:12:24.892 "flush": false, 00:12:24.892 "reset": true, 00:12:24.892 "nvme_admin": false, 00:12:24.892 "nvme_io": false, 00:12:24.892 "nvme_io_md": false, 00:12:24.893 "write_zeroes": true, 00:12:24.893 "zcopy": false, 00:12:24.893 "get_zone_info": false, 00:12:24.893 "zone_management": false, 00:12:24.893 "zone_append": false, 00:12:24.893 "compare": false, 00:12:24.893 "compare_and_write": false, 00:12:24.893 "abort": false, 00:12:24.893 "seek_hole": false, 00:12:24.893 "seek_data": false, 00:12:24.893 "copy": false, 00:12:24.893 
"nvme_iov_md": false 00:12:24.893 }, 00:12:24.893 "memory_domains": [ 00:12:24.893 { 00:12:24.893 "dma_device_id": "system", 00:12:24.893 "dma_device_type": 1 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.893 "dma_device_type": 2 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "dma_device_id": "system", 00:12:24.893 "dma_device_type": 1 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.893 "dma_device_type": 2 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "dma_device_id": "system", 00:12:24.893 "dma_device_type": 1 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.893 "dma_device_type": 2 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "dma_device_id": "system", 00:12:24.893 "dma_device_type": 1 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.893 "dma_device_type": 2 00:12:24.893 } 00:12:24.893 ], 00:12:24.893 "driver_specific": { 00:12:24.893 "raid": { 00:12:24.893 "uuid": "c248b2da-2fa1-4e6a-ba14-ed26ed173a42", 00:12:24.893 "strip_size_kb": 0, 00:12:24.893 "state": "online", 00:12:24.893 "raid_level": "raid1", 00:12:24.893 "superblock": false, 00:12:24.893 "num_base_bdevs": 4, 00:12:24.893 "num_base_bdevs_discovered": 4, 00:12:24.893 "num_base_bdevs_operational": 4, 00:12:24.893 "base_bdevs_list": [ 00:12:24.893 { 00:12:24.893 "name": "NewBaseBdev", 00:12:24.893 "uuid": "9322c425-7065-43a5-a477-ac4a982cd5f2", 00:12:24.893 "is_configured": true, 00:12:24.893 "data_offset": 0, 00:12:24.893 "data_size": 65536 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "name": "BaseBdev2", 00:12:24.893 "uuid": "1b9e2b03-0b40-4e57-b087-e8f08c627a77", 00:12:24.893 "is_configured": true, 00:12:24.893 "data_offset": 0, 00:12:24.893 "data_size": 65536 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "name": "BaseBdev3", 00:12:24.893 "uuid": "f579401c-0a6d-49b1-99b7-30c85da4e1ef", 00:12:24.893 "is_configured": true, 
00:12:24.893 "data_offset": 0, 00:12:24.893 "data_size": 65536 00:12:24.893 }, 00:12:24.893 { 00:12:24.893 "name": "BaseBdev4", 00:12:24.893 "uuid": "4f7a22d1-8a85-483f-bb11-7c8932351260", 00:12:24.893 "is_configured": true, 00:12:24.893 "data_offset": 0, 00:12:24.893 "data_size": 65536 00:12:24.893 } 00:12:24.893 ] 00:12:24.893 } 00:12:24.893 } 00:12:24.893 }' 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:24.893 BaseBdev2 00:12:24.893 BaseBdev3 00:12:24.893 BaseBdev4' 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.893 17:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.893 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.893 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.893 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.893 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:24.893 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.893 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.893 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.893 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.153 [2024-11-20 17:46:52.131903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:25.153 [2024-11-20 17:46:52.131955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.153 [2024-11-20 17:46:52.132093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.153 [2024-11-20 17:46:52.132433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.153 [2024-11-20 17:46:52.132455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73619 
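The loop above (bdev_raid.sh@188–193) extracts the configured base bdev names from the raid bdev's JSON with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`, then compares each base bdev's block parameters against the raid bdev. A minimal Python equivalent of that jq filter, run on an abridged copy of the JSON shown earlier in this log (only the fields the filter touches are kept), looks like:

```python
# Illustrative sketch only: the test itself uses jq/bash, not Python.
# The bdev names and is_configured flags mirror the bdev_get_bdevs output
# logged above; all other fields are omitted.
raid_bdev = {
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "NewBaseBdev", "is_configured": True},
                {"name": "BaseBdev2", "is_configured": True},
                {"name": "BaseBdev3", "is_configured": True},
                {"name": "BaseBdev4", "is_configured": True},
            ]
        }
    }
}

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)
# → ['NewBaseBdev', 'BaseBdev2', 'BaseBdev3', 'BaseBdev4']
```

In the log, these four names become `$base_bdev_names`, and the per-bdev loop then fetches each with `rpc_cmd bdev_get_bdevs -b <name>` and checks its `block_size`/`md_size`/`md_interleave`/`dif_type` tuple (`512   ` here) against the raid bdev's.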
00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73619 ']' 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73619 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73619 00:12:25.153 killing process with pid 73619 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73619' 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73619 00:12:25.153 [2024-11-20 17:46:52.167112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.153 17:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73619 00:12:25.747 [2024-11-20 17:46:52.613630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.685 ************************************ 00:12:26.685 END TEST raid_state_function_test 00:12:26.685 ************************************ 00:12:26.685 17:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:26.685 00:12:26.685 real 0m11.862s 00:12:26.685 user 0m18.627s 00:12:26.685 sys 0m2.050s 00:12:26.685 17:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.685 17:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.946 17:46:53 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:26.946 17:46:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:26.946 17:46:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.946 17:46:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.946 ************************************ 00:12:26.946 START TEST raid_state_function_test_sb 00:12:26.946 ************************************ 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:26.946 17:46:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74296 00:12:26.946 Process raid pid: 74296 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74296' 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74296 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74296 ']' 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:26.946 17:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.946 [2024-11-20 17:46:53.998979] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
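After the bdev_svc app starts, the test repeatedly calls `verify_raid_bdev_state Existed_Raid configuring raid1 0 4`, which fetches the raid bdev via `rpc_cmd bdev_raid_get_bdevs all`, selects it with jq, and compares its fields to the expected values. A rough Python sketch of that comparison, using field values taken from the `raid_bdev_info` JSON logged further below (this is an assumed simplification; the real shell helper also tallies discovered base bdevs from `base_bdevs_list`):

```python
import json

# Existed_Raid info as reported by bdev_raid_get_bdevs in this log run,
# trimmed to the fields the verification compares.
raid_bdev_info = json.loads("""
{
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Mirrors the checks in bdev_raid.sh's verify_raid_bdev_state
    # (simplified: the shell version derives the discovered count by
    # iterating base_bdevs_list).
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational)

ok = verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 4)
print(ok)
# → True
```

For raid1 the strip size is 0 (mirroring has no stripes), which is why the test passes `0` where the raid0/raid5f variants pass a real strip size.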
00:12:26.946 [2024-11-20 17:46:53.999119] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.207 [2024-11-20 17:46:54.180087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.207 [2024-11-20 17:46:54.322690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.467 [2024-11-20 17:46:54.563037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.467 [2024-11-20 17:46:54.563085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.726 17:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.726 17:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:27.726 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:27.726 17:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.726 17:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.726 [2024-11-20 17:46:54.878278] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:27.727 [2024-11-20 17:46:54.878369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:27.727 [2024-11-20 17:46:54.878381] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:27.727 [2024-11-20 17:46:54.878392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:27.727 [2024-11-20 17:46:54.878399] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:27.727 [2024-11-20 17:46:54.878408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:27.727 [2024-11-20 17:46:54.878415] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:27.727 [2024-11-20 17:46:54.878424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.727 17:46:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.727 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.985 17:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.985 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.985 "name": "Existed_Raid", 00:12:27.985 "uuid": "375acb11-2c88-4e58-a4d5-773d1f4cbb59", 00:12:27.985 "strip_size_kb": 0, 00:12:27.985 "state": "configuring", 00:12:27.985 "raid_level": "raid1", 00:12:27.985 "superblock": true, 00:12:27.985 "num_base_bdevs": 4, 00:12:27.985 "num_base_bdevs_discovered": 0, 00:12:27.985 "num_base_bdevs_operational": 4, 00:12:27.985 "base_bdevs_list": [ 00:12:27.985 { 00:12:27.985 "name": "BaseBdev1", 00:12:27.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.985 "is_configured": false, 00:12:27.985 "data_offset": 0, 00:12:27.985 "data_size": 0 00:12:27.985 }, 00:12:27.985 { 00:12:27.985 "name": "BaseBdev2", 00:12:27.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.985 "is_configured": false, 00:12:27.985 "data_offset": 0, 00:12:27.985 "data_size": 0 00:12:27.985 }, 00:12:27.985 { 00:12:27.985 "name": "BaseBdev3", 00:12:27.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.985 "is_configured": false, 00:12:27.985 "data_offset": 0, 00:12:27.985 "data_size": 0 00:12:27.985 }, 00:12:27.985 { 00:12:27.985 "name": "BaseBdev4", 00:12:27.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.985 "is_configured": false, 00:12:27.985 "data_offset": 0, 00:12:27.985 "data_size": 0 00:12:27.985 } 00:12:27.985 ] 00:12:27.985 }' 00:12:27.985 17:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.985 17:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 17:46:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 [2024-11-20 17:46:55.341456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.245 [2024-11-20 17:46:55.341517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 [2024-11-20 17:46:55.353391] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.245 [2024-11-20 17:46:55.353441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.245 [2024-11-20 17:46:55.353450] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.245 [2024-11-20 17:46:55.353460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.245 [2024-11-20 17:46:55.353467] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.245 [2024-11-20 17:46:55.353476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.245 [2024-11-20 17:46:55.353482] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:12:28.245 [2024-11-20 17:46:55.353492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 [2024-11-20 17:46:55.408370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.245 BaseBdev1 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.245 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.504 [ 00:12:28.504 { 00:12:28.504 "name": "BaseBdev1", 00:12:28.504 "aliases": [ 00:12:28.504 "32551e8c-bab3-4de4-8d98-00271aa5cd8c" 00:12:28.504 ], 00:12:28.504 "product_name": "Malloc disk", 00:12:28.504 "block_size": 512, 00:12:28.504 "num_blocks": 65536, 00:12:28.504 "uuid": "32551e8c-bab3-4de4-8d98-00271aa5cd8c", 00:12:28.504 "assigned_rate_limits": { 00:12:28.504 "rw_ios_per_sec": 0, 00:12:28.504 "rw_mbytes_per_sec": 0, 00:12:28.504 "r_mbytes_per_sec": 0, 00:12:28.504 "w_mbytes_per_sec": 0 00:12:28.504 }, 00:12:28.504 "claimed": true, 00:12:28.504 "claim_type": "exclusive_write", 00:12:28.504 "zoned": false, 00:12:28.504 "supported_io_types": { 00:12:28.504 "read": true, 00:12:28.504 "write": true, 00:12:28.504 "unmap": true, 00:12:28.504 "flush": true, 00:12:28.504 "reset": true, 00:12:28.504 "nvme_admin": false, 00:12:28.504 "nvme_io": false, 00:12:28.504 "nvme_io_md": false, 00:12:28.504 "write_zeroes": true, 00:12:28.504 "zcopy": true, 00:12:28.504 "get_zone_info": false, 00:12:28.504 "zone_management": false, 00:12:28.504 "zone_append": false, 00:12:28.504 "compare": false, 00:12:28.504 "compare_and_write": false, 00:12:28.504 "abort": true, 00:12:28.504 "seek_hole": false, 00:12:28.504 "seek_data": false, 00:12:28.504 "copy": true, 00:12:28.504 "nvme_iov_md": false 00:12:28.504 }, 00:12:28.504 "memory_domains": [ 00:12:28.504 { 00:12:28.504 "dma_device_id": "system", 00:12:28.504 "dma_device_type": 1 00:12:28.504 }, 00:12:28.504 { 00:12:28.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.504 "dma_device_type": 2 00:12:28.504 } 00:12:28.504 
], 00:12:28.504 "driver_specific": {} 00:12:28.504 } 00:12:28.504 ] 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.504 17:46:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.504 "name": "Existed_Raid", 00:12:28.504 "uuid": "f3c8d736-51bc-49bc-9083-64be06c6f830", 00:12:28.504 "strip_size_kb": 0, 00:12:28.504 "state": "configuring", 00:12:28.504 "raid_level": "raid1", 00:12:28.504 "superblock": true, 00:12:28.504 "num_base_bdevs": 4, 00:12:28.504 "num_base_bdevs_discovered": 1, 00:12:28.504 "num_base_bdevs_operational": 4, 00:12:28.504 "base_bdevs_list": [ 00:12:28.504 { 00:12:28.504 "name": "BaseBdev1", 00:12:28.504 "uuid": "32551e8c-bab3-4de4-8d98-00271aa5cd8c", 00:12:28.504 "is_configured": true, 00:12:28.504 "data_offset": 2048, 00:12:28.504 "data_size": 63488 00:12:28.504 }, 00:12:28.504 { 00:12:28.504 "name": "BaseBdev2", 00:12:28.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.504 "is_configured": false, 00:12:28.504 "data_offset": 0, 00:12:28.504 "data_size": 0 00:12:28.504 }, 00:12:28.504 { 00:12:28.504 "name": "BaseBdev3", 00:12:28.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.504 "is_configured": false, 00:12:28.504 "data_offset": 0, 00:12:28.504 "data_size": 0 00:12:28.504 }, 00:12:28.504 { 00:12:28.504 "name": "BaseBdev4", 00:12:28.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.504 "is_configured": false, 00:12:28.504 "data_offset": 0, 00:12:28.504 "data_size": 0 00:12:28.504 } 00:12:28.504 ] 00:12:28.504 }' 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.504 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.763 17:46:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 [2024-11-20 17:46:55.863741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.763 [2024-11-20 17:46:55.863824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 [2024-11-20 17:46:55.875743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.763 [2024-11-20 17:46:55.877868] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.763 [2024-11-20 17:46:55.877916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.763 [2024-11-20 17:46:55.877927] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.763 [2024-11-20 17:46:55.877937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.763 [2024-11-20 17:46:55.877945] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:28.763 [2024-11-20 17:46:55.877953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.763 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:28.763 "name": "Existed_Raid", 00:12:28.763 "uuid": "db8464cf-c142-4faa-b1b1-acf15c83014c", 00:12:28.763 "strip_size_kb": 0, 00:12:28.763 "state": "configuring", 00:12:28.763 "raid_level": "raid1", 00:12:28.763 "superblock": true, 00:12:28.763 "num_base_bdevs": 4, 00:12:28.763 "num_base_bdevs_discovered": 1, 00:12:28.763 "num_base_bdevs_operational": 4, 00:12:28.763 "base_bdevs_list": [ 00:12:28.763 { 00:12:28.763 "name": "BaseBdev1", 00:12:28.763 "uuid": "32551e8c-bab3-4de4-8d98-00271aa5cd8c", 00:12:28.763 "is_configured": true, 00:12:28.763 "data_offset": 2048, 00:12:28.763 "data_size": 63488 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "name": "BaseBdev2", 00:12:28.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.763 "is_configured": false, 00:12:28.763 "data_offset": 0, 00:12:28.763 "data_size": 0 00:12:28.763 }, 00:12:28.764 { 00:12:28.764 "name": "BaseBdev3", 00:12:28.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.764 "is_configured": false, 00:12:28.764 "data_offset": 0, 00:12:28.764 "data_size": 0 00:12:28.764 }, 00:12:28.764 { 00:12:28.764 "name": "BaseBdev4", 00:12:28.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.764 "is_configured": false, 00:12:28.764 "data_offset": 0, 00:12:28.764 "data_size": 0 00:12:28.764 } 00:12:28.764 ] 00:12:28.764 }' 00:12:28.764 17:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.764 17:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.331 [2024-11-20 17:46:56.360143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:12:29.331 BaseBdev2 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.331 [ 00:12:29.331 { 00:12:29.331 "name": "BaseBdev2", 00:12:29.331 "aliases": [ 00:12:29.331 "4b45aef5-68d0-4377-a9ec-0b7071202f82" 00:12:29.331 ], 00:12:29.331 "product_name": "Malloc disk", 00:12:29.331 "block_size": 512, 00:12:29.331 "num_blocks": 65536, 00:12:29.331 "uuid": "4b45aef5-68d0-4377-a9ec-0b7071202f82", 00:12:29.331 
"assigned_rate_limits": { 00:12:29.331 "rw_ios_per_sec": 0, 00:12:29.331 "rw_mbytes_per_sec": 0, 00:12:29.331 "r_mbytes_per_sec": 0, 00:12:29.331 "w_mbytes_per_sec": 0 00:12:29.331 }, 00:12:29.331 "claimed": true, 00:12:29.331 "claim_type": "exclusive_write", 00:12:29.331 "zoned": false, 00:12:29.331 "supported_io_types": { 00:12:29.331 "read": true, 00:12:29.331 "write": true, 00:12:29.331 "unmap": true, 00:12:29.331 "flush": true, 00:12:29.331 "reset": true, 00:12:29.331 "nvme_admin": false, 00:12:29.331 "nvme_io": false, 00:12:29.331 "nvme_io_md": false, 00:12:29.331 "write_zeroes": true, 00:12:29.331 "zcopy": true, 00:12:29.331 "get_zone_info": false, 00:12:29.331 "zone_management": false, 00:12:29.331 "zone_append": false, 00:12:29.331 "compare": false, 00:12:29.331 "compare_and_write": false, 00:12:29.331 "abort": true, 00:12:29.331 "seek_hole": false, 00:12:29.331 "seek_data": false, 00:12:29.331 "copy": true, 00:12:29.331 "nvme_iov_md": false 00:12:29.331 }, 00:12:29.331 "memory_domains": [ 00:12:29.331 { 00:12:29.331 "dma_device_id": "system", 00:12:29.331 "dma_device_type": 1 00:12:29.331 }, 00:12:29.331 { 00:12:29.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.331 "dma_device_type": 2 00:12:29.331 } 00:12:29.331 ], 00:12:29.331 "driver_specific": {} 00:12:29.331 } 00:12:29.331 ] 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.331 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.331 "name": "Existed_Raid", 00:12:29.331 "uuid": "db8464cf-c142-4faa-b1b1-acf15c83014c", 00:12:29.331 "strip_size_kb": 0, 00:12:29.331 "state": "configuring", 00:12:29.331 "raid_level": "raid1", 00:12:29.331 "superblock": true, 00:12:29.331 "num_base_bdevs": 4, 00:12:29.331 "num_base_bdevs_discovered": 2, 00:12:29.331 "num_base_bdevs_operational": 4, 
00:12:29.331 "base_bdevs_list": [ 00:12:29.331 { 00:12:29.331 "name": "BaseBdev1", 00:12:29.331 "uuid": "32551e8c-bab3-4de4-8d98-00271aa5cd8c", 00:12:29.331 "is_configured": true, 00:12:29.331 "data_offset": 2048, 00:12:29.331 "data_size": 63488 00:12:29.331 }, 00:12:29.331 { 00:12:29.331 "name": "BaseBdev2", 00:12:29.331 "uuid": "4b45aef5-68d0-4377-a9ec-0b7071202f82", 00:12:29.331 "is_configured": true, 00:12:29.331 "data_offset": 2048, 00:12:29.331 "data_size": 63488 00:12:29.331 }, 00:12:29.331 { 00:12:29.331 "name": "BaseBdev3", 00:12:29.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.331 "is_configured": false, 00:12:29.331 "data_offset": 0, 00:12:29.331 "data_size": 0 00:12:29.331 }, 00:12:29.331 { 00:12:29.331 "name": "BaseBdev4", 00:12:29.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.332 "is_configured": false, 00:12:29.332 "data_offset": 0, 00:12:29.332 "data_size": 0 00:12:29.332 } 00:12:29.332 ] 00:12:29.332 }' 00:12:29.332 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.332 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.897 [2024-11-20 17:46:56.858426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.897 BaseBdev3 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.897 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.898 [ 00:12:29.898 { 00:12:29.898 "name": "BaseBdev3", 00:12:29.898 "aliases": [ 00:12:29.898 "5e4bcb1f-bffc-4d7d-92e5-e7e49e313207" 00:12:29.898 ], 00:12:29.898 "product_name": "Malloc disk", 00:12:29.898 "block_size": 512, 00:12:29.898 "num_blocks": 65536, 00:12:29.898 "uuid": "5e4bcb1f-bffc-4d7d-92e5-e7e49e313207", 00:12:29.898 "assigned_rate_limits": { 00:12:29.898 "rw_ios_per_sec": 0, 00:12:29.898 "rw_mbytes_per_sec": 0, 00:12:29.898 "r_mbytes_per_sec": 0, 00:12:29.898 "w_mbytes_per_sec": 0 00:12:29.898 }, 00:12:29.898 "claimed": true, 00:12:29.898 "claim_type": "exclusive_write", 00:12:29.898 "zoned": false, 00:12:29.898 "supported_io_types": { 00:12:29.898 "read": true, 00:12:29.898 
"write": true, 00:12:29.898 "unmap": true, 00:12:29.898 "flush": true, 00:12:29.898 "reset": true, 00:12:29.898 "nvme_admin": false, 00:12:29.898 "nvme_io": false, 00:12:29.898 "nvme_io_md": false, 00:12:29.898 "write_zeroes": true, 00:12:29.898 "zcopy": true, 00:12:29.898 "get_zone_info": false, 00:12:29.898 "zone_management": false, 00:12:29.898 "zone_append": false, 00:12:29.898 "compare": false, 00:12:29.898 "compare_and_write": false, 00:12:29.898 "abort": true, 00:12:29.898 "seek_hole": false, 00:12:29.898 "seek_data": false, 00:12:29.898 "copy": true, 00:12:29.898 "nvme_iov_md": false 00:12:29.898 }, 00:12:29.898 "memory_domains": [ 00:12:29.898 { 00:12:29.898 "dma_device_id": "system", 00:12:29.898 "dma_device_type": 1 00:12:29.898 }, 00:12:29.898 { 00:12:29.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.898 "dma_device_type": 2 00:12:29.898 } 00:12:29.898 ], 00:12:29.898 "driver_specific": {} 00:12:29.898 } 00:12:29.898 ] 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.898 "name": "Existed_Raid", 00:12:29.898 "uuid": "db8464cf-c142-4faa-b1b1-acf15c83014c", 00:12:29.898 "strip_size_kb": 0, 00:12:29.898 "state": "configuring", 00:12:29.898 "raid_level": "raid1", 00:12:29.898 "superblock": true, 00:12:29.898 "num_base_bdevs": 4, 00:12:29.898 "num_base_bdevs_discovered": 3, 00:12:29.898 "num_base_bdevs_operational": 4, 00:12:29.898 "base_bdevs_list": [ 00:12:29.898 { 00:12:29.898 "name": "BaseBdev1", 00:12:29.898 "uuid": "32551e8c-bab3-4de4-8d98-00271aa5cd8c", 00:12:29.898 "is_configured": true, 00:12:29.898 "data_offset": 2048, 00:12:29.898 "data_size": 63488 00:12:29.898 }, 00:12:29.898 { 00:12:29.898 "name": "BaseBdev2", 00:12:29.898 "uuid": 
"4b45aef5-68d0-4377-a9ec-0b7071202f82", 00:12:29.898 "is_configured": true, 00:12:29.898 "data_offset": 2048, 00:12:29.898 "data_size": 63488 00:12:29.898 }, 00:12:29.898 { 00:12:29.898 "name": "BaseBdev3", 00:12:29.898 "uuid": "5e4bcb1f-bffc-4d7d-92e5-e7e49e313207", 00:12:29.898 "is_configured": true, 00:12:29.898 "data_offset": 2048, 00:12:29.898 "data_size": 63488 00:12:29.898 }, 00:12:29.898 { 00:12:29.898 "name": "BaseBdev4", 00:12:29.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.898 "is_configured": false, 00:12:29.898 "data_offset": 0, 00:12:29.898 "data_size": 0 00:12:29.898 } 00:12:29.898 ] 00:12:29.898 }' 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.898 17:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.467 [2024-11-20 17:46:57.430210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:30.467 [2024-11-20 17:46:57.430516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:30.467 [2024-11-20 17:46:57.430537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.467 [2024-11-20 17:46:57.430846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:30.467 [2024-11-20 17:46:57.431038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:30.467 [2024-11-20 17:46:57.431058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:30.467 BaseBdev4 00:12:30.467 [2024-11-20 17:46:57.431217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.467 [ 00:12:30.467 { 00:12:30.467 "name": "BaseBdev4", 00:12:30.467 "aliases": [ 00:12:30.467 "11614105-2c16-4509-aa84-c2d9a36a1bde" 00:12:30.467 ], 00:12:30.467 "product_name": "Malloc disk", 00:12:30.467 "block_size": 512, 00:12:30.467 
"num_blocks": 65536, 00:12:30.467 "uuid": "11614105-2c16-4509-aa84-c2d9a36a1bde", 00:12:30.467 "assigned_rate_limits": { 00:12:30.467 "rw_ios_per_sec": 0, 00:12:30.467 "rw_mbytes_per_sec": 0, 00:12:30.467 "r_mbytes_per_sec": 0, 00:12:30.467 "w_mbytes_per_sec": 0 00:12:30.467 }, 00:12:30.467 "claimed": true, 00:12:30.467 "claim_type": "exclusive_write", 00:12:30.467 "zoned": false, 00:12:30.467 "supported_io_types": { 00:12:30.467 "read": true, 00:12:30.467 "write": true, 00:12:30.467 "unmap": true, 00:12:30.467 "flush": true, 00:12:30.467 "reset": true, 00:12:30.467 "nvme_admin": false, 00:12:30.467 "nvme_io": false, 00:12:30.467 "nvme_io_md": false, 00:12:30.467 "write_zeroes": true, 00:12:30.467 "zcopy": true, 00:12:30.467 "get_zone_info": false, 00:12:30.467 "zone_management": false, 00:12:30.467 "zone_append": false, 00:12:30.467 "compare": false, 00:12:30.467 "compare_and_write": false, 00:12:30.467 "abort": true, 00:12:30.467 "seek_hole": false, 00:12:30.467 "seek_data": false, 00:12:30.467 "copy": true, 00:12:30.467 "nvme_iov_md": false 00:12:30.467 }, 00:12:30.467 "memory_domains": [ 00:12:30.467 { 00:12:30.467 "dma_device_id": "system", 00:12:30.467 "dma_device_type": 1 00:12:30.467 }, 00:12:30.467 { 00:12:30.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.467 "dma_device_type": 2 00:12:30.467 } 00:12:30.467 ], 00:12:30.467 "driver_specific": {} 00:12:30.467 } 00:12:30.467 ] 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.467 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.467 "name": "Existed_Raid", 00:12:30.468 "uuid": "db8464cf-c142-4faa-b1b1-acf15c83014c", 00:12:30.468 "strip_size_kb": 0, 00:12:30.468 "state": "online", 00:12:30.468 "raid_level": "raid1", 00:12:30.468 "superblock": true, 00:12:30.468 "num_base_bdevs": 4, 
00:12:30.468 "num_base_bdevs_discovered": 4, 00:12:30.468 "num_base_bdevs_operational": 4, 00:12:30.468 "base_bdevs_list": [ 00:12:30.468 { 00:12:30.468 "name": "BaseBdev1", 00:12:30.468 "uuid": "32551e8c-bab3-4de4-8d98-00271aa5cd8c", 00:12:30.468 "is_configured": true, 00:12:30.468 "data_offset": 2048, 00:12:30.468 "data_size": 63488 00:12:30.468 }, 00:12:30.468 { 00:12:30.468 "name": "BaseBdev2", 00:12:30.468 "uuid": "4b45aef5-68d0-4377-a9ec-0b7071202f82", 00:12:30.468 "is_configured": true, 00:12:30.468 "data_offset": 2048, 00:12:30.468 "data_size": 63488 00:12:30.468 }, 00:12:30.468 { 00:12:30.468 "name": "BaseBdev3", 00:12:30.468 "uuid": "5e4bcb1f-bffc-4d7d-92e5-e7e49e313207", 00:12:30.468 "is_configured": true, 00:12:30.468 "data_offset": 2048, 00:12:30.468 "data_size": 63488 00:12:30.468 }, 00:12:30.468 { 00:12:30.468 "name": "BaseBdev4", 00:12:30.468 "uuid": "11614105-2c16-4509-aa84-c2d9a36a1bde", 00:12:30.468 "is_configured": true, 00:12:30.468 "data_offset": 2048, 00:12:30.468 "data_size": 63488 00:12:30.468 } 00:12:30.468 ] 00:12:30.468 }' 00:12:30.468 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.468 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.727 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:30.727 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:30.727 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:30.727 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:30.727 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:30.727 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:30.727 
17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:30.727 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.727 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.728 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:30.728 [2024-11-20 17:46:57.901906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.987 17:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.987 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:30.987 "name": "Existed_Raid", 00:12:30.987 "aliases": [ 00:12:30.987 "db8464cf-c142-4faa-b1b1-acf15c83014c" 00:12:30.987 ], 00:12:30.987 "product_name": "Raid Volume", 00:12:30.987 "block_size": 512, 00:12:30.987 "num_blocks": 63488, 00:12:30.987 "uuid": "db8464cf-c142-4faa-b1b1-acf15c83014c", 00:12:30.987 "assigned_rate_limits": { 00:12:30.987 "rw_ios_per_sec": 0, 00:12:30.987 "rw_mbytes_per_sec": 0, 00:12:30.987 "r_mbytes_per_sec": 0, 00:12:30.987 "w_mbytes_per_sec": 0 00:12:30.987 }, 00:12:30.988 "claimed": false, 00:12:30.988 "zoned": false, 00:12:30.988 "supported_io_types": { 00:12:30.988 "read": true, 00:12:30.988 "write": true, 00:12:30.988 "unmap": false, 00:12:30.988 "flush": false, 00:12:30.988 "reset": true, 00:12:30.988 "nvme_admin": false, 00:12:30.988 "nvme_io": false, 00:12:30.988 "nvme_io_md": false, 00:12:30.988 "write_zeroes": true, 00:12:30.988 "zcopy": false, 00:12:30.988 "get_zone_info": false, 00:12:30.988 "zone_management": false, 00:12:30.988 "zone_append": false, 00:12:30.988 "compare": false, 00:12:30.988 "compare_and_write": false, 00:12:30.988 "abort": false, 00:12:30.988 "seek_hole": false, 00:12:30.988 "seek_data": false, 00:12:30.988 "copy": false, 00:12:30.988 
"nvme_iov_md": false 00:12:30.988 }, 00:12:30.988 "memory_domains": [ 00:12:30.988 { 00:12:30.988 "dma_device_id": "system", 00:12:30.988 "dma_device_type": 1 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.988 "dma_device_type": 2 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "dma_device_id": "system", 00:12:30.988 "dma_device_type": 1 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.988 "dma_device_type": 2 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "dma_device_id": "system", 00:12:30.988 "dma_device_type": 1 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.988 "dma_device_type": 2 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "dma_device_id": "system", 00:12:30.988 "dma_device_type": 1 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.988 "dma_device_type": 2 00:12:30.988 } 00:12:30.988 ], 00:12:30.988 "driver_specific": { 00:12:30.988 "raid": { 00:12:30.988 "uuid": "db8464cf-c142-4faa-b1b1-acf15c83014c", 00:12:30.988 "strip_size_kb": 0, 00:12:30.988 "state": "online", 00:12:30.988 "raid_level": "raid1", 00:12:30.988 "superblock": true, 00:12:30.988 "num_base_bdevs": 4, 00:12:30.988 "num_base_bdevs_discovered": 4, 00:12:30.988 "num_base_bdevs_operational": 4, 00:12:30.988 "base_bdevs_list": [ 00:12:30.988 { 00:12:30.988 "name": "BaseBdev1", 00:12:30.988 "uuid": "32551e8c-bab3-4de4-8d98-00271aa5cd8c", 00:12:30.988 "is_configured": true, 00:12:30.988 "data_offset": 2048, 00:12:30.988 "data_size": 63488 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "name": "BaseBdev2", 00:12:30.988 "uuid": "4b45aef5-68d0-4377-a9ec-0b7071202f82", 00:12:30.988 "is_configured": true, 00:12:30.988 "data_offset": 2048, 00:12:30.988 "data_size": 63488 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "name": "BaseBdev3", 00:12:30.988 "uuid": "5e4bcb1f-bffc-4d7d-92e5-e7e49e313207", 00:12:30.988 "is_configured": true, 
00:12:30.988 "data_offset": 2048, 00:12:30.988 "data_size": 63488 00:12:30.988 }, 00:12:30.988 { 00:12:30.988 "name": "BaseBdev4", 00:12:30.988 "uuid": "11614105-2c16-4509-aa84-c2d9a36a1bde", 00:12:30.988 "is_configured": true, 00:12:30.988 "data_offset": 2048, 00:12:30.988 "data_size": 63488 00:12:30.988 } 00:12:30.988 ] 00:12:30.988 } 00:12:30.988 } 00:12:30.988 }' 00:12:30.988 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:30.988 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:30.988 BaseBdev2 00:12:30.988 BaseBdev3 00:12:30.988 BaseBdev4' 00:12:30.988 17:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.988 17:46:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.988 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.248 [2024-11-20 17:46:58.261035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:31.248 17:46:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.248 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.508 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.508 "name": "Existed_Raid", 00:12:31.508 "uuid": "db8464cf-c142-4faa-b1b1-acf15c83014c", 00:12:31.508 "strip_size_kb": 0, 00:12:31.508 
"state": "online", 00:12:31.508 "raid_level": "raid1", 00:12:31.508 "superblock": true, 00:12:31.508 "num_base_bdevs": 4, 00:12:31.508 "num_base_bdevs_discovered": 3, 00:12:31.508 "num_base_bdevs_operational": 3, 00:12:31.508 "base_bdevs_list": [ 00:12:31.508 { 00:12:31.508 "name": null, 00:12:31.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.508 "is_configured": false, 00:12:31.508 "data_offset": 0, 00:12:31.508 "data_size": 63488 00:12:31.508 }, 00:12:31.508 { 00:12:31.508 "name": "BaseBdev2", 00:12:31.508 "uuid": "4b45aef5-68d0-4377-a9ec-0b7071202f82", 00:12:31.508 "is_configured": true, 00:12:31.508 "data_offset": 2048, 00:12:31.508 "data_size": 63488 00:12:31.508 }, 00:12:31.508 { 00:12:31.508 "name": "BaseBdev3", 00:12:31.508 "uuid": "5e4bcb1f-bffc-4d7d-92e5-e7e49e313207", 00:12:31.508 "is_configured": true, 00:12:31.508 "data_offset": 2048, 00:12:31.508 "data_size": 63488 00:12:31.508 }, 00:12:31.508 { 00:12:31.508 "name": "BaseBdev4", 00:12:31.508 "uuid": "11614105-2c16-4509-aa84-c2d9a36a1bde", 00:12:31.508 "is_configured": true, 00:12:31.508 "data_offset": 2048, 00:12:31.508 "data_size": 63488 00:12:31.508 } 00:12:31.508 ] 00:12:31.509 }' 00:12:31.509 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.509 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.768 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:31.768 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:31.768 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.768 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.768 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.769 17:46:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:31.769 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.769 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:31.769 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:31.769 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:31.769 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.769 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.769 [2024-11-20 17:46:58.891456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:32.029 17:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.029 17:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.029 [2024-11-20 17:46:59.059446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.029 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.288 [2024-11-20 17:46:59.223896] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:32.288 [2024-11-20 17:46:59.224057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.288 [2024-11-20 17:46:59.329769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.288 [2024-11-20 17:46:59.329840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.288 [2024-11-20 17:46:59.329855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.288 BaseBdev2 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.288 17:46:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:32.288 [ 00:12:32.288 { 00:12:32.288 "name": "BaseBdev2", 00:12:32.288 "aliases": [ 00:12:32.288 "12188e09-142a-49b6-a58f-ec96ef787d64" 00:12:32.288 ], 00:12:32.288 "product_name": "Malloc disk", 00:12:32.288 "block_size": 512, 00:12:32.288 "num_blocks": 65536, 00:12:32.288 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:32.288 "assigned_rate_limits": { 00:12:32.288 "rw_ios_per_sec": 0, 00:12:32.288 "rw_mbytes_per_sec": 0, 00:12:32.288 "r_mbytes_per_sec": 0, 00:12:32.288 "w_mbytes_per_sec": 0 00:12:32.288 }, 00:12:32.288 "claimed": false, 00:12:32.288 "zoned": false, 00:12:32.288 "supported_io_types": { 00:12:32.288 "read": true, 00:12:32.288 "write": true, 00:12:32.288 "unmap": true, 00:12:32.288 "flush": true, 00:12:32.288 "reset": true, 00:12:32.288 "nvme_admin": false, 00:12:32.288 "nvme_io": false, 00:12:32.288 "nvme_io_md": false, 00:12:32.288 "write_zeroes": true, 00:12:32.288 "zcopy": true, 00:12:32.288 "get_zone_info": false, 00:12:32.288 "zone_management": false, 00:12:32.288 "zone_append": false, 00:12:32.288 "compare": false, 00:12:32.547 "compare_and_write": false, 00:12:32.547 "abort": true, 00:12:32.547 "seek_hole": false, 00:12:32.547 "seek_data": false, 00:12:32.547 "copy": true, 00:12:32.547 "nvme_iov_md": false 00:12:32.547 }, 00:12:32.547 "memory_domains": [ 00:12:32.547 { 00:12:32.547 "dma_device_id": "system", 00:12:32.547 "dma_device_type": 1 00:12:32.547 }, 00:12:32.547 { 00:12:32.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.548 "dma_device_type": 2 00:12:32.548 } 00:12:32.548 ], 00:12:32.548 "driver_specific": {} 00:12:32.548 } 00:12:32.548 ] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:32.548 17:46:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.548 BaseBdev3 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.548 17:46:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.548 [ 00:12:32.548 { 00:12:32.548 "name": "BaseBdev3", 00:12:32.548 "aliases": [ 00:12:32.548 "40634ca5-e14c-4615-839a-64d20825b53e" 00:12:32.548 ], 00:12:32.548 "product_name": "Malloc disk", 00:12:32.548 "block_size": 512, 00:12:32.548 "num_blocks": 65536, 00:12:32.548 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:32.548 "assigned_rate_limits": { 00:12:32.548 "rw_ios_per_sec": 0, 00:12:32.548 "rw_mbytes_per_sec": 0, 00:12:32.548 "r_mbytes_per_sec": 0, 00:12:32.548 "w_mbytes_per_sec": 0 00:12:32.548 }, 00:12:32.548 "claimed": false, 00:12:32.548 "zoned": false, 00:12:32.548 "supported_io_types": { 00:12:32.548 "read": true, 00:12:32.548 "write": true, 00:12:32.548 "unmap": true, 00:12:32.548 "flush": true, 00:12:32.548 "reset": true, 00:12:32.548 "nvme_admin": false, 00:12:32.548 "nvme_io": false, 00:12:32.548 "nvme_io_md": false, 00:12:32.548 "write_zeroes": true, 00:12:32.548 "zcopy": true, 00:12:32.548 "get_zone_info": false, 00:12:32.548 "zone_management": false, 00:12:32.548 "zone_append": false, 00:12:32.548 "compare": false, 00:12:32.548 "compare_and_write": false, 00:12:32.548 "abort": true, 00:12:32.548 "seek_hole": false, 00:12:32.548 "seek_data": false, 00:12:32.548 "copy": true, 00:12:32.548 "nvme_iov_md": false 00:12:32.548 }, 00:12:32.548 "memory_domains": [ 00:12:32.548 { 00:12:32.548 "dma_device_id": "system", 00:12:32.548 "dma_device_type": 1 00:12:32.548 }, 00:12:32.548 { 00:12:32.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.548 "dma_device_type": 2 00:12:32.548 } 00:12:32.548 ], 00:12:32.548 "driver_specific": {} 00:12:32.548 } 00:12:32.548 ] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.548 BaseBdev4 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.548 [ 00:12:32.548 { 00:12:32.548 "name": "BaseBdev4", 00:12:32.548 "aliases": [ 00:12:32.548 "7e156d1d-80b9-4984-9267-17eec2263692" 00:12:32.548 ], 00:12:32.548 "product_name": "Malloc disk", 00:12:32.548 "block_size": 512, 00:12:32.548 "num_blocks": 65536, 00:12:32.548 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:32.548 "assigned_rate_limits": { 00:12:32.548 "rw_ios_per_sec": 0, 00:12:32.548 "rw_mbytes_per_sec": 0, 00:12:32.548 "r_mbytes_per_sec": 0, 00:12:32.548 "w_mbytes_per_sec": 0 00:12:32.548 }, 00:12:32.548 "claimed": false, 00:12:32.548 "zoned": false, 00:12:32.548 "supported_io_types": { 00:12:32.548 "read": true, 00:12:32.548 "write": true, 00:12:32.548 "unmap": true, 00:12:32.548 "flush": true, 00:12:32.548 "reset": true, 00:12:32.548 "nvme_admin": false, 00:12:32.548 "nvme_io": false, 00:12:32.548 "nvme_io_md": false, 00:12:32.548 "write_zeroes": true, 00:12:32.548 "zcopy": true, 00:12:32.548 "get_zone_info": false, 00:12:32.548 "zone_management": false, 00:12:32.548 "zone_append": false, 00:12:32.548 "compare": false, 00:12:32.548 "compare_and_write": false, 00:12:32.548 "abort": true, 00:12:32.548 "seek_hole": false, 00:12:32.548 "seek_data": false, 00:12:32.548 "copy": true, 00:12:32.548 "nvme_iov_md": false 00:12:32.548 }, 00:12:32.548 "memory_domains": [ 00:12:32.548 { 00:12:32.548 "dma_device_id": "system", 00:12:32.548 "dma_device_type": 1 00:12:32.548 }, 00:12:32.548 { 00:12:32.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.548 "dma_device_type": 2 00:12:32.548 } 00:12:32.548 ], 00:12:32.548 "driver_specific": {} 00:12:32.548 } 00:12:32.548 ] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.548 [2024-11-20 17:46:59.646207] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.548 [2024-11-20 17:46:59.646268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.548 [2024-11-20 17:46:59.646289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.548 [2024-11-20 17:46:59.648394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.548 [2024-11-20 17:46:59.648444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.548 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.549 "name": "Existed_Raid", 00:12:32.549 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:32.549 "strip_size_kb": 0, 00:12:32.549 "state": "configuring", 00:12:32.549 "raid_level": "raid1", 00:12:32.549 "superblock": true, 00:12:32.549 "num_base_bdevs": 4, 00:12:32.549 "num_base_bdevs_discovered": 3, 00:12:32.549 "num_base_bdevs_operational": 4, 00:12:32.549 "base_bdevs_list": [ 00:12:32.549 { 00:12:32.549 "name": "BaseBdev1", 00:12:32.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.549 "is_configured": false, 00:12:32.549 "data_offset": 0, 00:12:32.549 "data_size": 0 00:12:32.549 }, 00:12:32.549 { 00:12:32.549 "name": "BaseBdev2", 00:12:32.549 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 
00:12:32.549 "is_configured": true, 00:12:32.549 "data_offset": 2048, 00:12:32.549 "data_size": 63488 00:12:32.549 }, 00:12:32.549 { 00:12:32.549 "name": "BaseBdev3", 00:12:32.549 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:32.549 "is_configured": true, 00:12:32.549 "data_offset": 2048, 00:12:32.549 "data_size": 63488 00:12:32.549 }, 00:12:32.549 { 00:12:32.549 "name": "BaseBdev4", 00:12:32.549 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:32.549 "is_configured": true, 00:12:32.549 "data_offset": 2048, 00:12:32.549 "data_size": 63488 00:12:32.549 } 00:12:32.549 ] 00:12:32.549 }' 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.549 17:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.117 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:33.117 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.117 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.117 [2024-11-20 17:47:00.093538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:33.117 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.118 "name": "Existed_Raid", 00:12:33.118 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:33.118 "strip_size_kb": 0, 00:12:33.118 "state": "configuring", 00:12:33.118 "raid_level": "raid1", 00:12:33.118 "superblock": true, 00:12:33.118 "num_base_bdevs": 4, 00:12:33.118 "num_base_bdevs_discovered": 2, 00:12:33.118 "num_base_bdevs_operational": 4, 00:12:33.118 "base_bdevs_list": [ 00:12:33.118 { 00:12:33.118 "name": "BaseBdev1", 00:12:33.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.118 "is_configured": false, 00:12:33.118 "data_offset": 0, 00:12:33.118 "data_size": 0 00:12:33.118 }, 00:12:33.118 { 00:12:33.118 "name": null, 00:12:33.118 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:33.118 
"is_configured": false, 00:12:33.118 "data_offset": 0, 00:12:33.118 "data_size": 63488 00:12:33.118 }, 00:12:33.118 { 00:12:33.118 "name": "BaseBdev3", 00:12:33.118 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:33.118 "is_configured": true, 00:12:33.118 "data_offset": 2048, 00:12:33.118 "data_size": 63488 00:12:33.118 }, 00:12:33.118 { 00:12:33.118 "name": "BaseBdev4", 00:12:33.118 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:33.118 "is_configured": true, 00:12:33.118 "data_offset": 2048, 00:12:33.118 "data_size": 63488 00:12:33.118 } 00:12:33.118 ] 00:12:33.118 }' 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.118 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.378 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.378 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.378 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.378 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:33.378 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.638 [2024-11-20 17:47:00.603675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.638 BaseBdev1 
00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.638 [ 00:12:33.638 { 00:12:33.638 "name": "BaseBdev1", 00:12:33.638 "aliases": [ 00:12:33.638 "94f5a1e1-56e9-4182-bcf5-08098bec0e02" 00:12:33.638 ], 00:12:33.638 "product_name": "Malloc disk", 00:12:33.638 "block_size": 512, 00:12:33.638 "num_blocks": 65536, 00:12:33.638 "uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:33.638 "assigned_rate_limits": { 00:12:33.638 
"rw_ios_per_sec": 0, 00:12:33.638 "rw_mbytes_per_sec": 0, 00:12:33.638 "r_mbytes_per_sec": 0, 00:12:33.638 "w_mbytes_per_sec": 0 00:12:33.638 }, 00:12:33.638 "claimed": true, 00:12:33.638 "claim_type": "exclusive_write", 00:12:33.638 "zoned": false, 00:12:33.638 "supported_io_types": { 00:12:33.638 "read": true, 00:12:33.638 "write": true, 00:12:33.638 "unmap": true, 00:12:33.638 "flush": true, 00:12:33.638 "reset": true, 00:12:33.638 "nvme_admin": false, 00:12:33.638 "nvme_io": false, 00:12:33.638 "nvme_io_md": false, 00:12:33.638 "write_zeroes": true, 00:12:33.638 "zcopy": true, 00:12:33.638 "get_zone_info": false, 00:12:33.638 "zone_management": false, 00:12:33.638 "zone_append": false, 00:12:33.638 "compare": false, 00:12:33.638 "compare_and_write": false, 00:12:33.638 "abort": true, 00:12:33.638 "seek_hole": false, 00:12:33.638 "seek_data": false, 00:12:33.638 "copy": true, 00:12:33.638 "nvme_iov_md": false 00:12:33.638 }, 00:12:33.638 "memory_domains": [ 00:12:33.638 { 00:12:33.638 "dma_device_id": "system", 00:12:33.638 "dma_device_type": 1 00:12:33.638 }, 00:12:33.638 { 00:12:33.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.638 "dma_device_type": 2 00:12:33.638 } 00:12:33.638 ], 00:12:33.638 "driver_specific": {} 00:12:33.638 } 00:12:33.638 ] 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.638 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.639 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.639 "name": "Existed_Raid", 00:12:33.639 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:33.639 "strip_size_kb": 0, 00:12:33.639 "state": "configuring", 00:12:33.639 "raid_level": "raid1", 00:12:33.639 "superblock": true, 00:12:33.639 "num_base_bdevs": 4, 00:12:33.639 "num_base_bdevs_discovered": 3, 00:12:33.639 "num_base_bdevs_operational": 4, 00:12:33.639 "base_bdevs_list": [ 00:12:33.639 { 00:12:33.639 "name": "BaseBdev1", 00:12:33.639 "uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:33.639 "is_configured": true, 00:12:33.639 "data_offset": 2048, 00:12:33.639 "data_size": 63488 
00:12:33.639 }, 00:12:33.639 { 00:12:33.639 "name": null, 00:12:33.639 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:33.639 "is_configured": false, 00:12:33.639 "data_offset": 0, 00:12:33.639 "data_size": 63488 00:12:33.639 }, 00:12:33.639 { 00:12:33.639 "name": "BaseBdev3", 00:12:33.639 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:33.639 "is_configured": true, 00:12:33.639 "data_offset": 2048, 00:12:33.639 "data_size": 63488 00:12:33.639 }, 00:12:33.639 { 00:12:33.639 "name": "BaseBdev4", 00:12:33.639 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:33.639 "is_configured": true, 00:12:33.639 "data_offset": 2048, 00:12:33.639 "data_size": 63488 00:12:33.639 } 00:12:33.639 ] 00:12:33.639 }' 00:12:33.639 17:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.639 17:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.208 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:34.208 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.208 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.208 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.208 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.208 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.209 
[2024-11-20 17:47:01.158857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.209 17:47:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.209 "name": "Existed_Raid", 00:12:34.209 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:34.209 "strip_size_kb": 0, 00:12:34.209 "state": "configuring", 00:12:34.209 "raid_level": "raid1", 00:12:34.209 "superblock": true, 00:12:34.209 "num_base_bdevs": 4, 00:12:34.209 "num_base_bdevs_discovered": 2, 00:12:34.209 "num_base_bdevs_operational": 4, 00:12:34.209 "base_bdevs_list": [ 00:12:34.209 { 00:12:34.209 "name": "BaseBdev1", 00:12:34.209 "uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:34.209 "is_configured": true, 00:12:34.209 "data_offset": 2048, 00:12:34.209 "data_size": 63488 00:12:34.209 }, 00:12:34.209 { 00:12:34.209 "name": null, 00:12:34.209 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:34.209 "is_configured": false, 00:12:34.209 "data_offset": 0, 00:12:34.209 "data_size": 63488 00:12:34.209 }, 00:12:34.209 { 00:12:34.209 "name": null, 00:12:34.209 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:34.209 "is_configured": false, 00:12:34.209 "data_offset": 0, 00:12:34.209 "data_size": 63488 00:12:34.209 }, 00:12:34.209 { 00:12:34.209 "name": "BaseBdev4", 00:12:34.209 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:34.209 "is_configured": true, 00:12:34.209 "data_offset": 2048, 00:12:34.209 "data_size": 63488 00:12:34.209 } 00:12:34.209 ] 00:12:34.209 }' 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.209 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.476 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:34.476 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.476 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.476 
17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.745 [2024-11-20 17:47:01.693871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.745 "name": "Existed_Raid", 00:12:34.745 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:34.745 "strip_size_kb": 0, 00:12:34.745 "state": "configuring", 00:12:34.745 "raid_level": "raid1", 00:12:34.745 "superblock": true, 00:12:34.745 "num_base_bdevs": 4, 00:12:34.745 "num_base_bdevs_discovered": 3, 00:12:34.745 "num_base_bdevs_operational": 4, 00:12:34.745 "base_bdevs_list": [ 00:12:34.745 { 00:12:34.745 "name": "BaseBdev1", 00:12:34.745 "uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:34.745 "is_configured": true, 00:12:34.745 "data_offset": 2048, 00:12:34.745 "data_size": 63488 00:12:34.745 }, 00:12:34.745 { 00:12:34.745 "name": null, 00:12:34.745 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:34.745 "is_configured": false, 00:12:34.745 "data_offset": 0, 00:12:34.745 "data_size": 63488 00:12:34.745 }, 00:12:34.745 { 00:12:34.745 "name": "BaseBdev3", 00:12:34.745 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:34.745 "is_configured": true, 00:12:34.745 "data_offset": 2048, 00:12:34.745 "data_size": 63488 00:12:34.745 }, 00:12:34.745 { 00:12:34.745 "name": "BaseBdev4", 00:12:34.745 "uuid": 
"7e156d1d-80b9-4984-9267-17eec2263692", 00:12:34.745 "is_configured": true, 00:12:34.745 "data_offset": 2048, 00:12:34.745 "data_size": 63488 00:12:34.745 } 00:12:34.745 ] 00:12:34.745 }' 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.745 17:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.312 [2024-11-20 17:47:02.261068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.312 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.312 "name": "Existed_Raid", 00:12:35.312 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:35.312 "strip_size_kb": 0, 00:12:35.312 "state": "configuring", 00:12:35.312 "raid_level": "raid1", 00:12:35.312 "superblock": true, 00:12:35.312 "num_base_bdevs": 4, 00:12:35.313 "num_base_bdevs_discovered": 2, 00:12:35.313 "num_base_bdevs_operational": 4, 00:12:35.313 "base_bdevs_list": [ 00:12:35.313 { 00:12:35.313 "name": null, 00:12:35.313 
"uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:35.313 "is_configured": false, 00:12:35.313 "data_offset": 0, 00:12:35.313 "data_size": 63488 00:12:35.313 }, 00:12:35.313 { 00:12:35.313 "name": null, 00:12:35.313 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:35.313 "is_configured": false, 00:12:35.313 "data_offset": 0, 00:12:35.313 "data_size": 63488 00:12:35.313 }, 00:12:35.313 { 00:12:35.313 "name": "BaseBdev3", 00:12:35.313 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:35.313 "is_configured": true, 00:12:35.313 "data_offset": 2048, 00:12:35.313 "data_size": 63488 00:12:35.313 }, 00:12:35.313 { 00:12:35.313 "name": "BaseBdev4", 00:12:35.313 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:35.313 "is_configured": true, 00:12:35.313 "data_offset": 2048, 00:12:35.313 "data_size": 63488 00:12:35.313 } 00:12:35.313 ] 00:12:35.313 }' 00:12:35.313 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.313 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.882 [2024-11-20 17:47:02.859834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.882 "name": "Existed_Raid", 00:12:35.882 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:35.882 "strip_size_kb": 0, 00:12:35.882 "state": "configuring", 00:12:35.882 "raid_level": "raid1", 00:12:35.882 "superblock": true, 00:12:35.882 "num_base_bdevs": 4, 00:12:35.882 "num_base_bdevs_discovered": 3, 00:12:35.882 "num_base_bdevs_operational": 4, 00:12:35.882 "base_bdevs_list": [ 00:12:35.882 { 00:12:35.882 "name": null, 00:12:35.882 "uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:35.882 "is_configured": false, 00:12:35.882 "data_offset": 0, 00:12:35.882 "data_size": 63488 00:12:35.882 }, 00:12:35.882 { 00:12:35.882 "name": "BaseBdev2", 00:12:35.882 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:35.882 "is_configured": true, 00:12:35.882 "data_offset": 2048, 00:12:35.882 "data_size": 63488 00:12:35.882 }, 00:12:35.882 { 00:12:35.882 "name": "BaseBdev3", 00:12:35.882 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:35.882 "is_configured": true, 00:12:35.882 "data_offset": 2048, 00:12:35.882 "data_size": 63488 00:12:35.882 }, 00:12:35.882 { 00:12:35.882 "name": "BaseBdev4", 00:12:35.882 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:35.882 "is_configured": true, 00:12:35.882 "data_offset": 2048, 00:12:35.882 "data_size": 63488 00:12:35.882 } 00:12:35.882 ] 00:12:35.882 }' 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.882 17:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.452 17:47:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:36.452 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 94f5a1e1-56e9-4182-bcf5-08098bec0e02 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 [2024-11-20 17:47:03.467999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:36.453 [2024-11-20 17:47:03.468332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:36.453 [2024-11-20 17:47:03.468355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.453 [2024-11-20 17:47:03.468659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:36.453 
NewBaseBdev 00:12:36.453 [2024-11-20 17:47:03.468863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:36.453 [2024-11-20 17:47:03.468879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:36.453 [2024-11-20 17:47:03.469047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.453 [ 00:12:36.453 { 00:12:36.453 "name": "NewBaseBdev", 00:12:36.453 "aliases": [ 00:12:36.453 "94f5a1e1-56e9-4182-bcf5-08098bec0e02" 00:12:36.453 ], 00:12:36.453 "product_name": "Malloc disk", 00:12:36.453 "block_size": 512, 00:12:36.453 "num_blocks": 65536, 00:12:36.453 "uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:36.453 "assigned_rate_limits": { 00:12:36.453 "rw_ios_per_sec": 0, 00:12:36.453 "rw_mbytes_per_sec": 0, 00:12:36.453 "r_mbytes_per_sec": 0, 00:12:36.453 "w_mbytes_per_sec": 0 00:12:36.453 }, 00:12:36.453 "claimed": true, 00:12:36.453 "claim_type": "exclusive_write", 00:12:36.453 "zoned": false, 00:12:36.453 "supported_io_types": { 00:12:36.453 "read": true, 00:12:36.453 "write": true, 00:12:36.453 "unmap": true, 00:12:36.453 "flush": true, 00:12:36.453 "reset": true, 00:12:36.453 "nvme_admin": false, 00:12:36.453 "nvme_io": false, 00:12:36.453 "nvme_io_md": false, 00:12:36.453 "write_zeroes": true, 00:12:36.453 "zcopy": true, 00:12:36.453 "get_zone_info": false, 00:12:36.453 "zone_management": false, 00:12:36.453 "zone_append": false, 00:12:36.453 "compare": false, 00:12:36.453 "compare_and_write": false, 00:12:36.453 "abort": true, 00:12:36.453 "seek_hole": false, 00:12:36.453 "seek_data": false, 00:12:36.453 "copy": true, 00:12:36.453 "nvme_iov_md": false 00:12:36.453 }, 00:12:36.453 "memory_domains": [ 00:12:36.453 { 00:12:36.453 "dma_device_id": "system", 00:12:36.453 "dma_device_type": 1 00:12:36.453 }, 00:12:36.453 { 00:12:36.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.453 "dma_device_type": 2 00:12:36.453 } 00:12:36.453 ], 00:12:36.453 "driver_specific": {} 00:12:36.453 } 00:12:36.453 ] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.453 "name": "Existed_Raid", 00:12:36.453 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:36.453 "strip_size_kb": 0, 00:12:36.453 "state": "online", 00:12:36.453 "raid_level": 
"raid1", 00:12:36.453 "superblock": true, 00:12:36.453 "num_base_bdevs": 4, 00:12:36.453 "num_base_bdevs_discovered": 4, 00:12:36.453 "num_base_bdevs_operational": 4, 00:12:36.453 "base_bdevs_list": [ 00:12:36.453 { 00:12:36.453 "name": "NewBaseBdev", 00:12:36.453 "uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:36.453 "is_configured": true, 00:12:36.453 "data_offset": 2048, 00:12:36.453 "data_size": 63488 00:12:36.453 }, 00:12:36.453 { 00:12:36.453 "name": "BaseBdev2", 00:12:36.453 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:36.453 "is_configured": true, 00:12:36.453 "data_offset": 2048, 00:12:36.453 "data_size": 63488 00:12:36.453 }, 00:12:36.453 { 00:12:36.453 "name": "BaseBdev3", 00:12:36.453 "uuid": "40634ca5-e14c-4615-839a-64d20825b53e", 00:12:36.453 "is_configured": true, 00:12:36.453 "data_offset": 2048, 00:12:36.453 "data_size": 63488 00:12:36.453 }, 00:12:36.453 { 00:12:36.453 "name": "BaseBdev4", 00:12:36.453 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:36.453 "is_configured": true, 00:12:36.453 "data_offset": 2048, 00:12:36.453 "data_size": 63488 00:12:36.453 } 00:12:36.453 ] 00:12:36.453 }' 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.453 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.022 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:37.022 [2024-11-20 17:47:03.955578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.023 17:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.023 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:37.023 "name": "Existed_Raid", 00:12:37.023 "aliases": [ 00:12:37.023 "07d3173e-9854-492d-a43c-64a13dbf8bc8" 00:12:37.023 ], 00:12:37.023 "product_name": "Raid Volume", 00:12:37.023 "block_size": 512, 00:12:37.023 "num_blocks": 63488, 00:12:37.023 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:37.023 "assigned_rate_limits": { 00:12:37.023 "rw_ios_per_sec": 0, 00:12:37.023 "rw_mbytes_per_sec": 0, 00:12:37.023 "r_mbytes_per_sec": 0, 00:12:37.023 "w_mbytes_per_sec": 0 00:12:37.023 }, 00:12:37.023 "claimed": false, 00:12:37.023 "zoned": false, 00:12:37.023 "supported_io_types": { 00:12:37.023 "read": true, 00:12:37.023 "write": true, 00:12:37.023 "unmap": false, 00:12:37.023 "flush": false, 00:12:37.023 "reset": true, 00:12:37.023 "nvme_admin": false, 00:12:37.023 "nvme_io": false, 00:12:37.023 "nvme_io_md": false, 00:12:37.023 "write_zeroes": true, 00:12:37.023 "zcopy": false, 00:12:37.023 "get_zone_info": false, 00:12:37.023 "zone_management": false, 00:12:37.023 "zone_append": false, 00:12:37.023 "compare": false, 00:12:37.023 "compare_and_write": false, 00:12:37.023 "abort": false, 00:12:37.023 "seek_hole": false, 
00:12:37.023 "seek_data": false, 00:12:37.023 "copy": false, 00:12:37.023 "nvme_iov_md": false 00:12:37.023 }, 00:12:37.023 "memory_domains": [ 00:12:37.023 { 00:12:37.023 "dma_device_id": "system", 00:12:37.023 "dma_device_type": 1 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.023 "dma_device_type": 2 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "dma_device_id": "system", 00:12:37.023 "dma_device_type": 1 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.023 "dma_device_type": 2 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "dma_device_id": "system", 00:12:37.023 "dma_device_type": 1 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.023 "dma_device_type": 2 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "dma_device_id": "system", 00:12:37.023 "dma_device_type": 1 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.023 "dma_device_type": 2 00:12:37.023 } 00:12:37.023 ], 00:12:37.023 "driver_specific": { 00:12:37.023 "raid": { 00:12:37.023 "uuid": "07d3173e-9854-492d-a43c-64a13dbf8bc8", 00:12:37.023 "strip_size_kb": 0, 00:12:37.023 "state": "online", 00:12:37.023 "raid_level": "raid1", 00:12:37.023 "superblock": true, 00:12:37.023 "num_base_bdevs": 4, 00:12:37.023 "num_base_bdevs_discovered": 4, 00:12:37.023 "num_base_bdevs_operational": 4, 00:12:37.023 "base_bdevs_list": [ 00:12:37.023 { 00:12:37.023 "name": "NewBaseBdev", 00:12:37.023 "uuid": "94f5a1e1-56e9-4182-bcf5-08098bec0e02", 00:12:37.023 "is_configured": true, 00:12:37.023 "data_offset": 2048, 00:12:37.023 "data_size": 63488 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "name": "BaseBdev2", 00:12:37.023 "uuid": "12188e09-142a-49b6-a58f-ec96ef787d64", 00:12:37.023 "is_configured": true, 00:12:37.023 "data_offset": 2048, 00:12:37.023 "data_size": 63488 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "name": "BaseBdev3", 00:12:37.023 "uuid": 
"40634ca5-e14c-4615-839a-64d20825b53e", 00:12:37.023 "is_configured": true, 00:12:37.023 "data_offset": 2048, 00:12:37.023 "data_size": 63488 00:12:37.023 }, 00:12:37.023 { 00:12:37.023 "name": "BaseBdev4", 00:12:37.023 "uuid": "7e156d1d-80b9-4984-9267-17eec2263692", 00:12:37.023 "is_configured": true, 00:12:37.023 "data_offset": 2048, 00:12:37.023 "data_size": 63488 00:12:37.023 } 00:12:37.023 ] 00:12:37.023 } 00:12:37.023 } 00:12:37.023 }' 00:12:37.023 17:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:37.023 BaseBdev2 00:12:37.023 BaseBdev3 00:12:37.023 BaseBdev4' 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.023 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.024 
17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.024 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.283 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.283 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.284 [2024-11-20 17:47:04.238754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:37.284 [2024-11-20 17:47:04.238801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.284 [2024-11-20 17:47:04.238909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.284 [2024-11-20 17:47:04.239261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.284 [2024-11-20 17:47:04.239284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:37.284 17:47:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74296 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74296 ']' 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74296 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74296 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.284 killing process with pid 74296 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74296' 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74296 00:12:37.284 [2024-11-20 17:47:04.286198] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.284 17:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74296 00:12:37.853 [2024-11-20 17:47:04.724489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.854 17:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:38.854 00:12:38.854 real 0m12.072s 00:12:38.854 user 0m18.882s 00:12:38.854 sys 0m2.229s 00:12:38.854 17:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.854 17:47:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.854 ************************************ 00:12:38.854 END TEST raid_state_function_test_sb 00:12:38.854 ************************************ 00:12:38.854 17:47:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:38.854 17:47:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:38.854 17:47:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.854 17:47:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.113 ************************************ 00:12:39.113 START TEST raid_superblock_test 00:12:39.113 ************************************ 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
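The `END TEST` / `START TEST` banners and the `real`/`user`/`sys` timings above come from SPDK's `run_test` wrapper (it appears to live in `test/common/autotest_common.sh` and also manages xtrace). A minimal, hypothetical re-implementation of just the banner-and-timing pattern, not the real helper:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the run_test banner/timing pattern seen in the log.
# run_test_sketch is a hypothetical name; the real SPDK helper does more
# (xtrace toggling, argument validation, `time` built-in output).
run_test_sketch() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    local start=$SECONDS
    "$@"                      # run the test function with its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name ($((SECONDS - start))s, rc=$rc)"
    echo "************************************"
    return $rc
}
```

Invoked as `run_test_sketch raid_superblock_test some_test_fn raid1 4`, it brackets the test's output with the same style of banner pair the log shows.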
00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74972 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74972 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74972 ']' 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.113 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.113 [2024-11-20 17:47:06.133575] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
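The prologue above launches the SPDK app (`bdev_svc -L bdev_raid`), records `raid_pid=74972`, and blocks in `waitforlisten` until the RPC socket `/var/tmp/spdk.sock` is up; `killprocess` later tears the app down. A self-contained sketch of that start/wait/kill lifecycle, using a marker file in place of the real UNIX socket and a plain background process in place of `bdev_svc` (all names here are illustrative, not SPDK's helpers):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten/killprocess pattern. A background process
# becomes "ready" after a short delay by touching $ready_file; the real
# harness instead polls for the app's RPC socket to accept connections.
start_and_wait() {
    local ready_file=$1 timeout=${2:-5}
    # Stand-in for "bdev_svc ... &": readiness after ~0.2s, then park in
    # sleep (exec so the recorded pid is the process we later kill).
    ( sleep 0.2; : > "$ready_file"; exec sleep 60 ) &
    app_pid=$!
    local waited=0
    until [ -e "$ready_file" ]; do
        sleep 0.1
        waited=$((waited + 1))
        [ "$waited" -ge $((timeout * 10)) ] && return 1   # gave up waiting
    done
    return 0
}

stop_app() {
    # killprocess-style teardown: signal the recorded pid, then reap it.
    kill "$app_pid" 2>/dev/null
    wait "$app_pid" 2>/dev/null
    return 0
}
```

The key design point mirrored from the log is that the pid is captured at launch (`$!`) and every later step (`kill -0` liveness probe, `kill`, `wait`) operates on that one recorded pid rather than matching by process name.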
00:12:39.113 [2024-11-20 17:47:06.134163] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74972 ] 00:12:39.373 [2024-11-20 17:47:06.308468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.373 [2024-11-20 17:47:06.449322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.632 [2024-11-20 17:47:06.690653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.632 [2024-11-20 17:47:06.690705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:39.892 
17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.892 17:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.892 malloc1 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.892 [2024-11-20 17:47:07.053812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:39.892 [2024-11-20 17:47:07.053892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.892 [2024-11-20 17:47:07.053919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:39.892 [2024-11-20 17:47:07.053930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.892 [2024-11-20 17:47:07.056412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.892 [2024-11-20 17:47:07.056448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:39.892 pt1 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.892 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.152 malloc2 00:12:40.152 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.152 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:40.152 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.152 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.152 [2024-11-20 17:47:07.120342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:40.152 [2024-11-20 17:47:07.120422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.152 [2024-11-20 17:47:07.120462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:40.152 [2024-11-20 17:47:07.120472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.152 [2024-11-20 17:47:07.122972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.153 [2024-11-20 17:47:07.123025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:40.153 
pt2 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 malloc3 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 [2024-11-20 17:47:07.199278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:40.153 [2024-11-20 17:47:07.199350] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.153 [2024-11-20 17:47:07.199377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:40.153 [2024-11-20 17:47:07.199387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.153 [2024-11-20 17:47:07.201813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.153 [2024-11-20 17:47:07.201848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:40.153 pt3 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 malloc4 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 [2024-11-20 17:47:07.262969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:40.153 [2024-11-20 17:47:07.263057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.153 [2024-11-20 17:47:07.263081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:40.153 [2024-11-20 17:47:07.263091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.153 [2024-11-20 17:47:07.265576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.153 [2024-11-20 17:47:07.265712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:40.153 pt4 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 [2024-11-20 17:47:07.274991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:40.153 [2024-11-20 17:47:07.277173] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:40.153 [2024-11-20 17:47:07.277239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:40.153 [2024-11-20 17:47:07.277302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:40.153 [2024-11-20 17:47:07.277513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:40.153 [2024-11-20 17:47:07.277530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.153 [2024-11-20 17:47:07.277825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:40.153 [2024-11-20 17:47:07.278045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:40.153 [2024-11-20 17:47:07.278064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:40.153 [2024-11-20 17:47:07.278254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.153 
17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.412 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.412 "name": "raid_bdev1", 00:12:40.412 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:40.412 "strip_size_kb": 0, 00:12:40.413 "state": "online", 00:12:40.413 "raid_level": "raid1", 00:12:40.413 "superblock": true, 00:12:40.413 "num_base_bdevs": 4, 00:12:40.413 "num_base_bdevs_discovered": 4, 00:12:40.413 "num_base_bdevs_operational": 4, 00:12:40.413 "base_bdevs_list": [ 00:12:40.413 { 00:12:40.413 "name": "pt1", 00:12:40.413 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.413 "is_configured": true, 00:12:40.413 "data_offset": 2048, 00:12:40.413 "data_size": 63488 00:12:40.413 }, 00:12:40.413 { 00:12:40.413 "name": "pt2", 00:12:40.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.413 "is_configured": true, 00:12:40.413 "data_offset": 2048, 00:12:40.413 "data_size": 63488 00:12:40.413 }, 00:12:40.413 { 00:12:40.413 "name": "pt3", 00:12:40.413 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.413 "is_configured": true, 00:12:40.413 "data_offset": 2048, 00:12:40.413 "data_size": 63488 
00:12:40.413 }, 00:12:40.413 { 00:12:40.413 "name": "pt4", 00:12:40.413 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.413 "is_configured": true, 00:12:40.413 "data_offset": 2048, 00:12:40.413 "data_size": 63488 00:12:40.413 } 00:12:40.413 ] 00:12:40.413 }' 00:12:40.413 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.413 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 [2024-11-20 17:47:07.722553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.671 "name": "raid_bdev1", 00:12:40.671 "aliases": [ 00:12:40.671 "6c049c14-dca1-45e0-96f7-72bed7814069" 00:12:40.671 ], 
00:12:40.671 "product_name": "Raid Volume", 00:12:40.671 "block_size": 512, 00:12:40.671 "num_blocks": 63488, 00:12:40.671 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:40.671 "assigned_rate_limits": { 00:12:40.671 "rw_ios_per_sec": 0, 00:12:40.671 "rw_mbytes_per_sec": 0, 00:12:40.671 "r_mbytes_per_sec": 0, 00:12:40.671 "w_mbytes_per_sec": 0 00:12:40.671 }, 00:12:40.671 "claimed": false, 00:12:40.671 "zoned": false, 00:12:40.671 "supported_io_types": { 00:12:40.671 "read": true, 00:12:40.671 "write": true, 00:12:40.671 "unmap": false, 00:12:40.671 "flush": false, 00:12:40.671 "reset": true, 00:12:40.671 "nvme_admin": false, 00:12:40.671 "nvme_io": false, 00:12:40.671 "nvme_io_md": false, 00:12:40.671 "write_zeroes": true, 00:12:40.671 "zcopy": false, 00:12:40.671 "get_zone_info": false, 00:12:40.671 "zone_management": false, 00:12:40.671 "zone_append": false, 00:12:40.671 "compare": false, 00:12:40.671 "compare_and_write": false, 00:12:40.671 "abort": false, 00:12:40.671 "seek_hole": false, 00:12:40.671 "seek_data": false, 00:12:40.671 "copy": false, 00:12:40.671 "nvme_iov_md": false 00:12:40.671 }, 00:12:40.671 "memory_domains": [ 00:12:40.671 { 00:12:40.671 "dma_device_id": "system", 00:12:40.671 "dma_device_type": 1 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.671 "dma_device_type": 2 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "dma_device_id": "system", 00:12:40.671 "dma_device_type": 1 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.671 "dma_device_type": 2 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "dma_device_id": "system", 00:12:40.671 "dma_device_type": 1 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.671 "dma_device_type": 2 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "dma_device_id": "system", 00:12:40.671 "dma_device_type": 1 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:40.671 "dma_device_type": 2 00:12:40.671 } 00:12:40.671 ], 00:12:40.671 "driver_specific": { 00:12:40.671 "raid": { 00:12:40.671 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:40.671 "strip_size_kb": 0, 00:12:40.671 "state": "online", 00:12:40.671 "raid_level": "raid1", 00:12:40.671 "superblock": true, 00:12:40.671 "num_base_bdevs": 4, 00:12:40.671 "num_base_bdevs_discovered": 4, 00:12:40.671 "num_base_bdevs_operational": 4, 00:12:40.671 "base_bdevs_list": [ 00:12:40.671 { 00:12:40.671 "name": "pt1", 00:12:40.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.671 "is_configured": true, 00:12:40.671 "data_offset": 2048, 00:12:40.671 "data_size": 63488 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "name": "pt2", 00:12:40.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.671 "is_configured": true, 00:12:40.671 "data_offset": 2048, 00:12:40.671 "data_size": 63488 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "name": "pt3", 00:12:40.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.671 "is_configured": true, 00:12:40.671 "data_offset": 2048, 00:12:40.671 "data_size": 63488 00:12:40.671 }, 00:12:40.671 { 00:12:40.671 "name": "pt4", 00:12:40.671 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.671 "is_configured": true, 00:12:40.671 "data_offset": 2048, 00:12:40.671 "data_size": 63488 00:12:40.671 } 00:12:40.671 ] 00:12:40.671 } 00:12:40.671 } 00:12:40.671 }' 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.671 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:40.671 pt2 00:12:40.671 pt3 00:12:40.671 pt4' 00:12:40.672 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.931 17:47:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.931 17:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:40.931 [2024-11-20 17:47:08.061985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6c049c14-dca1-45e0-96f7-72bed7814069 00:12:40.931 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6c049c14-dca1-45e0-96f7-72bed7814069 ']' 00:12:40.932 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.932 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.932 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 [2024-11-20 17:47:08.109537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.192 [2024-11-20 17:47:08.109654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.192 [2024-11-20 17:47:08.109775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.192 [2024-11-20 17:47:08.109887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.192 [2024-11-20 17:47:08.109904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 [2024-11-20 17:47:08.269340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:41.192 [2024-11-20 17:47:08.271607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:41.192 [2024-11-20 17:47:08.271663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:41.192 [2024-11-20 17:47:08.271700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:41.192 [2024-11-20 17:47:08.271759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:41.192 [2024-11-20 17:47:08.271824] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:41.192 [2024-11-20 17:47:08.271844] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:41.192 [2024-11-20 17:47:08.271862] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:41.192 [2024-11-20 17:47:08.271876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.192 [2024-11-20 17:47:08.271887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:12:41.192 request: 00:12:41.192 { 00:12:41.192 "name": "raid_bdev1", 00:12:41.192 "raid_level": "raid1", 00:12:41.192 "base_bdevs": [ 00:12:41.192 "malloc1", 00:12:41.192 "malloc2", 00:12:41.192 "malloc3", 00:12:41.192 "malloc4" 00:12:41.192 ], 00:12:41.192 "superblock": false, 00:12:41.192 "method": "bdev_raid_create", 00:12:41.192 "req_id": 1 00:12:41.192 } 00:12:41.192 Got JSON-RPC error response 00:12:41.192 response: 00:12:41.192 { 00:12:41.192 "code": -17, 00:12:41.192 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:41.192 } 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:41.192 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:41.193 17:47:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.193 [2024-11-20 17:47:08.337167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:41.193 [2024-11-20 17:47:08.337338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.193 [2024-11-20 17:47:08.337375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:41.193 [2024-11-20 17:47:08.337410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.193 [2024-11-20 17:47:08.339999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.193 [2024-11-20 17:47:08.340102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:41.193 [2024-11-20 17:47:08.340230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:41.193 [2024-11-20 17:47:08.340330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:41.193 pt1 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.193 17:47:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.193 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.452 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.452 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.452 "name": "raid_bdev1", 00:12:41.452 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:41.452 "strip_size_kb": 0, 00:12:41.452 "state": "configuring", 00:12:41.452 "raid_level": "raid1", 00:12:41.452 "superblock": true, 00:12:41.452 "num_base_bdevs": 4, 00:12:41.452 "num_base_bdevs_discovered": 1, 00:12:41.452 "num_base_bdevs_operational": 4, 00:12:41.452 "base_bdevs_list": [ 00:12:41.452 { 00:12:41.452 "name": "pt1", 00:12:41.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.452 "is_configured": true, 00:12:41.452 "data_offset": 2048, 00:12:41.452 "data_size": 63488 00:12:41.452 }, 00:12:41.452 { 00:12:41.452 "name": null, 00:12:41.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.452 "is_configured": false, 00:12:41.452 "data_offset": 2048, 00:12:41.452 "data_size": 63488 00:12:41.452 }, 00:12:41.452 { 00:12:41.452 "name": null, 00:12:41.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.452 
"is_configured": false, 00:12:41.452 "data_offset": 2048, 00:12:41.452 "data_size": 63488 00:12:41.452 }, 00:12:41.452 { 00:12:41.452 "name": null, 00:12:41.452 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.452 "is_configured": false, 00:12:41.452 "data_offset": 2048, 00:12:41.452 "data_size": 63488 00:12:41.452 } 00:12:41.452 ] 00:12:41.452 }' 00:12:41.452 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.452 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.712 [2024-11-20 17:47:08.784474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:41.712 [2024-11-20 17:47:08.784689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.712 [2024-11-20 17:47:08.784721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:41.712 [2024-11-20 17:47:08.784733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.712 [2024-11-20 17:47:08.785279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.712 [2024-11-20 17:47:08.785300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:41.712 [2024-11-20 17:47:08.785401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:41.712 [2024-11-20 17:47:08.785429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:41.712 pt2 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.712 [2024-11-20 17:47:08.796444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.712 "name": "raid_bdev1", 00:12:41.712 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:41.712 "strip_size_kb": 0, 00:12:41.712 "state": "configuring", 00:12:41.712 "raid_level": "raid1", 00:12:41.712 "superblock": true, 00:12:41.712 "num_base_bdevs": 4, 00:12:41.712 "num_base_bdevs_discovered": 1, 00:12:41.712 "num_base_bdevs_operational": 4, 00:12:41.712 "base_bdevs_list": [ 00:12:41.712 { 00:12:41.712 "name": "pt1", 00:12:41.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.712 "is_configured": true, 00:12:41.712 "data_offset": 2048, 00:12:41.712 "data_size": 63488 00:12:41.712 }, 00:12:41.712 { 00:12:41.712 "name": null, 00:12:41.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.712 "is_configured": false, 00:12:41.712 "data_offset": 0, 00:12:41.712 "data_size": 63488 00:12:41.712 }, 00:12:41.712 { 00:12:41.712 "name": null, 00:12:41.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.712 "is_configured": false, 00:12:41.712 "data_offset": 2048, 00:12:41.712 "data_size": 63488 00:12:41.712 }, 00:12:41.712 { 00:12:41.712 "name": null, 00:12:41.712 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.712 "is_configured": false, 00:12:41.712 "data_offset": 2048, 00:12:41.712 "data_size": 63488 00:12:41.712 } 00:12:41.712 ] 00:12:41.712 }' 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.712 17:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.295 [2024-11-20 17:47:09.267640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:42.295 [2024-11-20 17:47:09.267828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.295 [2024-11-20 17:47:09.267870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:42.295 [2024-11-20 17:47:09.267904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.295 [2024-11-20 17:47:09.268491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.295 [2024-11-20 17:47:09.268553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:42.295 [2024-11-20 17:47:09.268689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:42.295 [2024-11-20 17:47:09.268744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:42.295 pt2 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:42.295 17:47:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.295 [2024-11-20 17:47:09.279547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:42.295 [2024-11-20 17:47:09.279641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.295 [2024-11-20 17:47:09.279679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:42.295 [2024-11-20 17:47:09.279707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.295 [2024-11-20 17:47:09.280154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.295 [2024-11-20 17:47:09.280208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:42.295 [2024-11-20 17:47:09.280306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:42.295 [2024-11-20 17:47:09.280354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:42.295 pt3 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.295 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.296 [2024-11-20 17:47:09.291488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:42.296 [2024-11-20 
17:47:09.291532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.296 [2024-11-20 17:47:09.291549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:42.296 [2024-11-20 17:47:09.291558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.296 [2024-11-20 17:47:09.291961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.296 [2024-11-20 17:47:09.291977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:42.296 [2024-11-20 17:47:09.292052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:42.296 [2024-11-20 17:47:09.292078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:42.296 [2024-11-20 17:47:09.292233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:42.296 [2024-11-20 17:47:09.292241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.296 [2024-11-20 17:47:09.292500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:42.296 [2024-11-20 17:47:09.292660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:42.296 [2024-11-20 17:47:09.292681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:42.296 [2024-11-20 17:47:09.292858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.296 pt4 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.296 "name": "raid_bdev1", 00:12:42.296 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:42.296 "strip_size_kb": 0, 00:12:42.296 "state": "online", 00:12:42.296 "raid_level": "raid1", 00:12:42.296 "superblock": true, 00:12:42.296 "num_base_bdevs": 4, 00:12:42.296 
"num_base_bdevs_discovered": 4, 00:12:42.296 "num_base_bdevs_operational": 4, 00:12:42.296 "base_bdevs_list": [ 00:12:42.296 { 00:12:42.296 "name": "pt1", 00:12:42.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.296 "is_configured": true, 00:12:42.296 "data_offset": 2048, 00:12:42.296 "data_size": 63488 00:12:42.296 }, 00:12:42.296 { 00:12:42.296 "name": "pt2", 00:12:42.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.296 "is_configured": true, 00:12:42.296 "data_offset": 2048, 00:12:42.296 "data_size": 63488 00:12:42.296 }, 00:12:42.296 { 00:12:42.296 "name": "pt3", 00:12:42.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.296 "is_configured": true, 00:12:42.296 "data_offset": 2048, 00:12:42.296 "data_size": 63488 00:12:42.296 }, 00:12:42.296 { 00:12:42.296 "name": "pt4", 00:12:42.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:42.296 "is_configured": true, 00:12:42.296 "data_offset": 2048, 00:12:42.296 "data_size": 63488 00:12:42.296 } 00:12:42.296 ] 00:12:42.296 }' 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.296 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.587 [2024-11-20 17:47:09.727239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.587 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.846 "name": "raid_bdev1", 00:12:42.846 "aliases": [ 00:12:42.846 "6c049c14-dca1-45e0-96f7-72bed7814069" 00:12:42.846 ], 00:12:42.846 "product_name": "Raid Volume", 00:12:42.846 "block_size": 512, 00:12:42.846 "num_blocks": 63488, 00:12:42.846 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:42.846 "assigned_rate_limits": { 00:12:42.846 "rw_ios_per_sec": 0, 00:12:42.846 "rw_mbytes_per_sec": 0, 00:12:42.846 "r_mbytes_per_sec": 0, 00:12:42.846 "w_mbytes_per_sec": 0 00:12:42.846 }, 00:12:42.846 "claimed": false, 00:12:42.846 "zoned": false, 00:12:42.846 "supported_io_types": { 00:12:42.846 "read": true, 00:12:42.846 "write": true, 00:12:42.846 "unmap": false, 00:12:42.846 "flush": false, 00:12:42.846 "reset": true, 00:12:42.846 "nvme_admin": false, 00:12:42.846 "nvme_io": false, 00:12:42.846 "nvme_io_md": false, 00:12:42.846 "write_zeroes": true, 00:12:42.846 "zcopy": false, 00:12:42.846 "get_zone_info": false, 00:12:42.846 "zone_management": false, 00:12:42.846 "zone_append": false, 00:12:42.846 "compare": false, 00:12:42.846 "compare_and_write": false, 00:12:42.846 "abort": false, 00:12:42.846 "seek_hole": false, 00:12:42.846 "seek_data": false, 00:12:42.846 "copy": false, 00:12:42.846 "nvme_iov_md": false 00:12:42.846 }, 00:12:42.846 "memory_domains": [ 00:12:42.846 { 00:12:42.846 "dma_device_id": "system", 00:12:42.846 
"dma_device_type": 1 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.846 "dma_device_type": 2 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "dma_device_id": "system", 00:12:42.846 "dma_device_type": 1 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.846 "dma_device_type": 2 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "dma_device_id": "system", 00:12:42.846 "dma_device_type": 1 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.846 "dma_device_type": 2 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "dma_device_id": "system", 00:12:42.846 "dma_device_type": 1 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.846 "dma_device_type": 2 00:12:42.846 } 00:12:42.846 ], 00:12:42.846 "driver_specific": { 00:12:42.846 "raid": { 00:12:42.846 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:42.846 "strip_size_kb": 0, 00:12:42.846 "state": "online", 00:12:42.846 "raid_level": "raid1", 00:12:42.846 "superblock": true, 00:12:42.846 "num_base_bdevs": 4, 00:12:42.846 "num_base_bdevs_discovered": 4, 00:12:42.846 "num_base_bdevs_operational": 4, 00:12:42.846 "base_bdevs_list": [ 00:12:42.846 { 00:12:42.846 "name": "pt1", 00:12:42.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.846 "is_configured": true, 00:12:42.846 "data_offset": 2048, 00:12:42.846 "data_size": 63488 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "name": "pt2", 00:12:42.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.846 "is_configured": true, 00:12:42.846 "data_offset": 2048, 00:12:42.846 "data_size": 63488 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "name": "pt3", 00:12:42.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.846 "is_configured": true, 00:12:42.846 "data_offset": 2048, 00:12:42.846 "data_size": 63488 00:12:42.846 }, 00:12:42.846 { 00:12:42.846 "name": "pt4", 00:12:42.846 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:42.846 "is_configured": true, 00:12:42.846 "data_offset": 2048, 00:12:42.846 "data_size": 63488 00:12:42.846 } 00:12:42.846 ] 00:12:42.846 } 00:12:42.846 } 00:12:42.846 }' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:42.846 pt2 00:12:42.846 pt3 00:12:42.846 pt4' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.846 17:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.846 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.846 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.846 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.847 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:42.847 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.847 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:42.847 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.106 [2024-11-20 17:47:10.070557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6c049c14-dca1-45e0-96f7-72bed7814069 '!=' 6c049c14-dca1-45e0-96f7-72bed7814069 ']' 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.106 [2024-11-20 17:47:10.102239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:43.106 17:47:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.106 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.106 "name": "raid_bdev1", 00:12:43.106 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:43.106 "strip_size_kb": 0, 00:12:43.106 "state": "online", 
00:12:43.106 "raid_level": "raid1", 00:12:43.106 "superblock": true, 00:12:43.106 "num_base_bdevs": 4, 00:12:43.106 "num_base_bdevs_discovered": 3, 00:12:43.106 "num_base_bdevs_operational": 3, 00:12:43.106 "base_bdevs_list": [ 00:12:43.106 { 00:12:43.106 "name": null, 00:12:43.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.106 "is_configured": false, 00:12:43.106 "data_offset": 0, 00:12:43.106 "data_size": 63488 00:12:43.106 }, 00:12:43.106 { 00:12:43.106 "name": "pt2", 00:12:43.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.106 "is_configured": true, 00:12:43.106 "data_offset": 2048, 00:12:43.106 "data_size": 63488 00:12:43.106 }, 00:12:43.106 { 00:12:43.106 "name": "pt3", 00:12:43.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.106 "is_configured": true, 00:12:43.106 "data_offset": 2048, 00:12:43.106 "data_size": 63488 00:12:43.106 }, 00:12:43.106 { 00:12:43.107 "name": "pt4", 00:12:43.107 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.107 "is_configured": true, 00:12:43.107 "data_offset": 2048, 00:12:43.107 "data_size": 63488 00:12:43.107 } 00:12:43.107 ] 00:12:43.107 }' 00:12:43.107 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.107 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 [2024-11-20 17:47:10.609431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.677 [2024-11-20 17:47:10.609580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.677 [2024-11-20 17:47:10.609719] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:43.677 [2024-11-20 17:47:10.609899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.677 [2024-11-20 17:47:10.609948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:43.677 
17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 [2024-11-20 17:47:10.693212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:43.677 [2024-11-20 17:47:10.693378] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.677 [2024-11-20 17:47:10.693403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:43.677 [2024-11-20 17:47:10.693412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.677 [2024-11-20 17:47:10.696021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.677 [2024-11-20 17:47:10.696055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:43.677 [2024-11-20 17:47:10.696147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:43.677 [2024-11-20 17:47:10.696205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:43.677 pt2 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.677 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.677 "name": "raid_bdev1", 00:12:43.677 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:43.677 "strip_size_kb": 0, 00:12:43.677 "state": "configuring", 00:12:43.677 "raid_level": "raid1", 00:12:43.677 "superblock": true, 00:12:43.677 "num_base_bdevs": 4, 00:12:43.677 "num_base_bdevs_discovered": 1, 00:12:43.677 "num_base_bdevs_operational": 3, 00:12:43.677 "base_bdevs_list": [ 00:12:43.677 { 00:12:43.677 "name": null, 00:12:43.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.677 "is_configured": false, 00:12:43.677 "data_offset": 2048, 00:12:43.677 "data_size": 63488 00:12:43.677 }, 00:12:43.677 { 00:12:43.677 "name": "pt2", 00:12:43.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.677 "is_configured": true, 00:12:43.677 "data_offset": 2048, 00:12:43.677 "data_size": 63488 00:12:43.677 }, 00:12:43.677 { 00:12:43.677 "name": null, 00:12:43.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.677 "is_configured": false, 00:12:43.677 "data_offset": 2048, 00:12:43.677 "data_size": 63488 00:12:43.677 }, 00:12:43.677 { 00:12:43.677 "name": null, 00:12:43.677 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.677 "is_configured": false, 00:12:43.677 "data_offset": 2048, 00:12:43.678 "data_size": 63488 00:12:43.678 } 00:12:43.678 ] 00:12:43.678 }' 
00:12:43.678 17:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.678 17:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.246 [2024-11-20 17:47:11.172734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:44.246 [2024-11-20 17:47:11.172927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.246 [2024-11-20 17:47:11.172975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:44.246 [2024-11-20 17:47:11.173004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.246 [2024-11-20 17:47:11.173572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.246 [2024-11-20 17:47:11.173637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:44.246 [2024-11-20 17:47:11.173775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:44.246 [2024-11-20 17:47:11.173831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:44.246 pt3 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.246 "name": "raid_bdev1", 00:12:44.246 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:44.246 "strip_size_kb": 0, 00:12:44.246 "state": "configuring", 00:12:44.246 "raid_level": "raid1", 00:12:44.246 "superblock": true, 00:12:44.246 "num_base_bdevs": 4, 00:12:44.246 "num_base_bdevs_discovered": 2, 00:12:44.246 "num_base_bdevs_operational": 3, 00:12:44.246 
"base_bdevs_list": [ 00:12:44.246 { 00:12:44.246 "name": null, 00:12:44.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.246 "is_configured": false, 00:12:44.246 "data_offset": 2048, 00:12:44.246 "data_size": 63488 00:12:44.246 }, 00:12:44.246 { 00:12:44.246 "name": "pt2", 00:12:44.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.246 "is_configured": true, 00:12:44.246 "data_offset": 2048, 00:12:44.246 "data_size": 63488 00:12:44.246 }, 00:12:44.246 { 00:12:44.246 "name": "pt3", 00:12:44.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.246 "is_configured": true, 00:12:44.246 "data_offset": 2048, 00:12:44.246 "data_size": 63488 00:12:44.246 }, 00:12:44.246 { 00:12:44.246 "name": null, 00:12:44.246 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.246 "is_configured": false, 00:12:44.246 "data_offset": 2048, 00:12:44.246 "data_size": 63488 00:12:44.246 } 00:12:44.246 ] 00:12:44.246 }' 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.246 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.507 [2024-11-20 17:47:11.640051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:44.507 [2024-11-20 17:47:11.640158] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.507 [2024-11-20 17:47:11.640193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:44.507 [2024-11-20 17:47:11.640204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.507 [2024-11-20 17:47:11.640763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.507 [2024-11-20 17:47:11.640783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:44.507 [2024-11-20 17:47:11.640909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:44.507 [2024-11-20 17:47:11.640939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:44.507 [2024-11-20 17:47:11.641133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:44.507 [2024-11-20 17:47:11.641144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.507 [2024-11-20 17:47:11.641458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:44.507 [2024-11-20 17:47:11.641659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:44.507 [2024-11-20 17:47:11.641682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:44.507 [2024-11-20 17:47:11.641867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.507 pt4 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.507 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.766 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.766 "name": "raid_bdev1", 00:12:44.766 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:44.766 "strip_size_kb": 0, 00:12:44.766 "state": "online", 00:12:44.767 "raid_level": "raid1", 00:12:44.767 "superblock": true, 00:12:44.767 "num_base_bdevs": 4, 00:12:44.767 "num_base_bdevs_discovered": 3, 00:12:44.767 "num_base_bdevs_operational": 3, 00:12:44.767 "base_bdevs_list": [ 00:12:44.767 { 00:12:44.767 "name": null, 00:12:44.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.767 "is_configured": false, 00:12:44.767 
"data_offset": 2048, 00:12:44.767 "data_size": 63488 00:12:44.767 }, 00:12:44.767 { 00:12:44.767 "name": "pt2", 00:12:44.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.767 "is_configured": true, 00:12:44.767 "data_offset": 2048, 00:12:44.767 "data_size": 63488 00:12:44.767 }, 00:12:44.767 { 00:12:44.767 "name": "pt3", 00:12:44.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.767 "is_configured": true, 00:12:44.767 "data_offset": 2048, 00:12:44.767 "data_size": 63488 00:12:44.767 }, 00:12:44.767 { 00:12:44.767 "name": "pt4", 00:12:44.767 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.767 "is_configured": true, 00:12:44.767 "data_offset": 2048, 00:12:44.767 "data_size": 63488 00:12:44.767 } 00:12:44.767 ] 00:12:44.767 }' 00:12:44.767 17:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.767 17:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.026 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.026 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.026 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.026 [2024-11-20 17:47:12.123174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.026 [2024-11-20 17:47:12.123222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.026 [2024-11-20 17:47:12.123326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.026 [2024-11-20 17:47:12.123414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.026 [2024-11-20 17:47:12.123428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:45.026 17:47:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.026 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.026 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.026 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.027 [2024-11-20 17:47:12.191050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:45.027 [2024-11-20 17:47:12.191244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:45.027 [2024-11-20 17:47:12.191269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:45.027 [2024-11-20 17:47:12.191283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.027 [2024-11-20 17:47:12.193960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.027 [2024-11-20 17:47:12.194004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:45.027 [2024-11-20 17:47:12.194111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:45.027 [2024-11-20 17:47:12.194170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:45.027 [2024-11-20 17:47:12.194332] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:45.027 pt1 00:12:45.027 [2024-11-20 17:47:12.194388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.027 [2024-11-20 17:47:12.194407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:45.027 [2024-11-20 17:47:12.194470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:45.027 [2024-11-20 17:47:12.194590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local
expected_state=configuring 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.027 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.286 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.286 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.286 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.286 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.286 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.286 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.286 "name": "raid_bdev1", 00:12:45.286 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:45.286 "strip_size_kb": 0, 00:12:45.286 "state": "configuring", 00:12:45.286 "raid_level": "raid1", 00:12:45.286 "superblock": true, 00:12:45.286 "num_base_bdevs": 4, 00:12:45.286 "num_base_bdevs_discovered": 2, 00:12:45.286 "num_base_bdevs_operational": 3, 00:12:45.286 "base_bdevs_list": [ 00:12:45.286 { 00:12:45.286 "name": null, 00:12:45.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.286 "is_configured": false, 00:12:45.286 "data_offset": 2048, 00:12:45.286 
"data_size": 63488 00:12:45.286 }, 00:12:45.286 { 00:12:45.286 "name": "pt2", 00:12:45.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.286 "is_configured": true, 00:12:45.286 "data_offset": 2048, 00:12:45.286 "data_size": 63488 00:12:45.286 }, 00:12:45.286 { 00:12:45.286 "name": "pt3", 00:12:45.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.286 "is_configured": true, 00:12:45.286 "data_offset": 2048, 00:12:45.286 "data_size": 63488 00:12:45.286 }, 00:12:45.286 { 00:12:45.286 "name": null, 00:12:45.286 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.286 "is_configured": false, 00:12:45.286 "data_offset": 2048, 00:12:45.286 "data_size": 63488 00:12:45.286 } 00:12:45.286 ] 00:12:45.286 }' 00:12:45.286 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.286 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.546 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:45.546 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.546 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.546 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:45.546 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.546 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.547 [2024-11-20 
17:47:12.690258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:45.547 [2024-11-20 17:47:12.690358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.547 [2024-11-20 17:47:12.690387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:45.547 [2024-11-20 17:47:12.690398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.547 [2024-11-20 17:47:12.690933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.547 [2024-11-20 17:47:12.690962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:45.547 [2024-11-20 17:47:12.691085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:45.547 [2024-11-20 17:47:12.691120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:45.547 [2024-11-20 17:47:12.691284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:45.547 [2024-11-20 17:47:12.691299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.547 [2024-11-20 17:47:12.691599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:45.547 [2024-11-20 17:47:12.691768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:45.547 [2024-11-20 17:47:12.691787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:45.547 [2024-11-20 17:47:12.691944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.547 pt4 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:45.547 17:47:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.547 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.807 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.807 "name": "raid_bdev1", 00:12:45.807 "uuid": "6c049c14-dca1-45e0-96f7-72bed7814069", 00:12:45.807 "strip_size_kb": 0, 00:12:45.807 "state": "online", 00:12:45.807 "raid_level": "raid1", 00:12:45.807 "superblock": true, 00:12:45.807 "num_base_bdevs": 4, 00:12:45.807 "num_base_bdevs_discovered": 3, 00:12:45.807 "num_base_bdevs_operational": 3, 00:12:45.807 "base_bdevs_list": [ 00:12:45.807 { 
00:12:45.807 "name": null, 00:12:45.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.807 "is_configured": false, 00:12:45.807 "data_offset": 2048, 00:12:45.807 "data_size": 63488 00:12:45.807 }, 00:12:45.807 { 00:12:45.807 "name": "pt2", 00:12:45.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.807 "is_configured": true, 00:12:45.807 "data_offset": 2048, 00:12:45.807 "data_size": 63488 00:12:45.807 }, 00:12:45.807 { 00:12:45.807 "name": "pt3", 00:12:45.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.807 "is_configured": true, 00:12:45.807 "data_offset": 2048, 00:12:45.807 "data_size": 63488 00:12:45.807 }, 00:12:45.807 { 00:12:45.807 "name": "pt4", 00:12:45.807 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.807 "is_configured": true, 00:12:45.807 "data_offset": 2048, 00:12:45.807 "data_size": 63488 00:12:45.807 } 00:12:45.807 ] 00:12:45.807 }' 00:12:45.807 17:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.807 17:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.067 
17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:46.067 [2024-11-20 17:47:13.145836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6c049c14-dca1-45e0-96f7-72bed7814069 '!=' 6c049c14-dca1-45e0-96f7-72bed7814069 ']' 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74972 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74972 ']' 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74972 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74972 00:12:46.067 killing process with pid 74972 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74972' 00:12:46.067 17:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74972 00:12:46.067 [2024-11-20 17:47:13.214561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.067 [2024-11-20 17:47:13.214691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.067 17:47:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74972 00:12:46.067 [2024-11-20 17:47:13.214780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.067 [2024-11-20 17:47:13.214795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:46.636 [2024-11-20 17:47:13.672096] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.018 17:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:48.018 00:12:48.018 real 0m8.917s 00:12:48.018 user 0m13.844s 00:12:48.018 sys 0m1.635s 00:12:48.018 17:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.019 17:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.019 ************************************ 00:12:48.019 END TEST raid_superblock_test 00:12:48.019 ************************************ 00:12:48.019 17:47:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:48.019 17:47:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:48.019 17:47:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.019 17:47:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.019 ************************************ 00:12:48.019 START TEST raid_read_error_test 00:12:48.019 ************************************ 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:48.019 
17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:48.019 17:47:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fgAU9gZGpP 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75460 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75460 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75460 ']' 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.019 17:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.019 [2024-11-20 17:47:15.145181] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:12:48.019 [2024-11-20 17:47:15.145319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75460 ] 00:12:48.278 [2024-11-20 17:47:15.324625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.538 [2024-11-20 17:47:15.465688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.538 [2024-11-20 17:47:15.702985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.538 [2024-11-20 17:47:15.703055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 BaseBdev1_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 true 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 [2024-11-20 17:47:16.088926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:49.108 [2024-11-20 17:47:16.089025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.108 [2024-11-20 17:47:16.089050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:49.108 [2024-11-20 17:47:16.089064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.108 [2024-11-20 17:47:16.091676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.108 [2024-11-20 17:47:16.091717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.108 BaseBdev1 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 BaseBdev2_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 true 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 [2024-11-20 17:47:16.167001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:49.108 [2024-11-20 17:47:16.167076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.108 [2024-11-20 17:47:16.167094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:49.108 [2024-11-20 17:47:16.167106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.108 [2024-11-20 17:47:16.169711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.108 [2024-11-20 17:47:16.169754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:49.108 BaseBdev2 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 BaseBdev3_malloc 00:12:49.108 17:47:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 true 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 [2024-11-20 17:47:16.262004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:49.108 [2024-11-20 17:47:16.262093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.108 [2024-11-20 17:47:16.262124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:49.108 [2024-11-20 17:47:16.262136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.108 [2024-11-20 17:47:16.264786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.108 [2024-11-20 17:47:16.264835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:49.108 BaseBdev3 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.108 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.368 BaseBdev4_malloc 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.368 true 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.368 [2024-11-20 17:47:16.337060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:49.368 [2024-11-20 17:47:16.337134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.368 [2024-11-20 17:47:16.337154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:49.368 [2024-11-20 17:47:16.337165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.368 [2024-11-20 17:47:16.339586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.368 [2024-11-20 17:47:16.339626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:49.368 BaseBdev4 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.368 [2024-11-20 17:47:16.349132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.368 [2024-11-20 17:47:16.351297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.368 [2024-11-20 17:47:16.351379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.368 [2024-11-20 17:47:16.351443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:49.368 [2024-11-20 17:47:16.351710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:49.368 [2024-11-20 17:47:16.351731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:49.368 [2024-11-20 17:47:16.352047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:49.368 [2024-11-20 17:47:16.352258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:49.368 [2024-11-20 17:47:16.352274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:49.368 [2024-11-20 17:47:16.352479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:49.368 17:47:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.368 "name": "raid_bdev1", 00:12:49.368 "uuid": "cc4fc961-086b-4ed3-9f92-373bfb90c78a", 00:12:49.368 "strip_size_kb": 0, 00:12:49.368 "state": "online", 00:12:49.368 "raid_level": "raid1", 00:12:49.368 "superblock": true, 00:12:49.368 "num_base_bdevs": 4, 00:12:49.368 "num_base_bdevs_discovered": 4, 00:12:49.368 "num_base_bdevs_operational": 4, 00:12:49.368 "base_bdevs_list": [ 00:12:49.368 { 
00:12:49.368 "name": "BaseBdev1", 00:12:49.368 "uuid": "a7593c3d-e113-5cff-a7b8-e195e7afa077", 00:12:49.368 "is_configured": true, 00:12:49.368 "data_offset": 2048, 00:12:49.368 "data_size": 63488 00:12:49.368 }, 00:12:49.368 { 00:12:49.368 "name": "BaseBdev2", 00:12:49.368 "uuid": "dcd86202-8b1e-57d3-9064-4b7f9d583a5d", 00:12:49.368 "is_configured": true, 00:12:49.368 "data_offset": 2048, 00:12:49.368 "data_size": 63488 00:12:49.368 }, 00:12:49.368 { 00:12:49.368 "name": "BaseBdev3", 00:12:49.368 "uuid": "6fd3e654-8d8c-57a0-b438-e91a3457067e", 00:12:49.368 "is_configured": true, 00:12:49.368 "data_offset": 2048, 00:12:49.368 "data_size": 63488 00:12:49.368 }, 00:12:49.368 { 00:12:49.368 "name": "BaseBdev4", 00:12:49.368 "uuid": "7891159f-cf12-5773-a94b-e1806a633e2f", 00:12:49.368 "is_configured": true, 00:12:49.368 "data_offset": 2048, 00:12:49.368 "data_size": 63488 00:12:49.368 } 00:12:49.368 ] 00:12:49.368 }' 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.368 17:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.628 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:49.628 17:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:49.887 [2024-11-20 17:47:16.873757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.827 17:47:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.827 17:47:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.827 "name": "raid_bdev1", 00:12:50.827 "uuid": "cc4fc961-086b-4ed3-9f92-373bfb90c78a", 00:12:50.827 "strip_size_kb": 0, 00:12:50.827 "state": "online", 00:12:50.827 "raid_level": "raid1", 00:12:50.827 "superblock": true, 00:12:50.827 "num_base_bdevs": 4, 00:12:50.827 "num_base_bdevs_discovered": 4, 00:12:50.827 "num_base_bdevs_operational": 4, 00:12:50.827 "base_bdevs_list": [ 00:12:50.827 { 00:12:50.827 "name": "BaseBdev1", 00:12:50.827 "uuid": "a7593c3d-e113-5cff-a7b8-e195e7afa077", 00:12:50.827 "is_configured": true, 00:12:50.827 "data_offset": 2048, 00:12:50.827 "data_size": 63488 00:12:50.827 }, 00:12:50.827 { 00:12:50.827 "name": "BaseBdev2", 00:12:50.827 "uuid": "dcd86202-8b1e-57d3-9064-4b7f9d583a5d", 00:12:50.827 "is_configured": true, 00:12:50.827 "data_offset": 2048, 00:12:50.827 "data_size": 63488 00:12:50.827 }, 00:12:50.827 { 00:12:50.827 "name": "BaseBdev3", 00:12:50.827 "uuid": "6fd3e654-8d8c-57a0-b438-e91a3457067e", 00:12:50.827 "is_configured": true, 00:12:50.827 "data_offset": 2048, 00:12:50.827 "data_size": 63488 00:12:50.827 }, 00:12:50.827 { 00:12:50.827 "name": "BaseBdev4", 00:12:50.827 "uuid": "7891159f-cf12-5773-a94b-e1806a633e2f", 00:12:50.827 "is_configured": true, 00:12:50.827 "data_offset": 2048, 00:12:50.827 "data_size": 63488 00:12:50.827 } 00:12:50.827 ] 00:12:50.827 }' 00:12:50.827 17:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.828 17:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.167 [2024-11-20 17:47:18.264485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.167 [2024-11-20 17:47:18.264540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.167 [2024-11-20 17:47:18.267630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.167 [2024-11-20 17:47:18.267701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.167 [2024-11-20 17:47:18.267837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.167 [2024-11-20 17:47:18.267857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:51.167 { 00:12:51.167 "results": [ 00:12:51.167 { 00:12:51.167 "job": "raid_bdev1", 00:12:51.167 "core_mask": "0x1", 00:12:51.167 "workload": "randrw", 00:12:51.167 "percentage": 50, 00:12:51.167 "status": "finished", 00:12:51.167 "queue_depth": 1, 00:12:51.167 "io_size": 131072, 00:12:51.167 "runtime": 1.39136, 00:12:51.167 "iops": 7650.78771849126, 00:12:51.167 "mibps": 956.3484648114076, 00:12:51.167 "io_failed": 0, 00:12:51.167 "io_timeout": 0, 00:12:51.167 "avg_latency_us": 127.98743998966243, 00:12:51.167 "min_latency_us": 23.811353711790392, 00:12:51.167 "max_latency_us": 1516.7720524017468 00:12:51.167 } 00:12:51.167 ], 00:12:51.167 "core_count": 1 00:12:51.167 } 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75460 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75460 ']' 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75460 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75460 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.167 killing process with pid 75460 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75460' 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75460 00:12:51.167 [2024-11-20 17:47:18.296504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.167 17:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75460 00:12:51.737 [2024-11-20 17:47:18.677687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fgAU9gZGpP 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:53.118 00:12:53.118 real 0m4.969s 00:12:53.118 user 0m5.714s 00:12:53.118 sys 0m0.713s 
00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.118 17:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.118 ************************************ 00:12:53.118 END TEST raid_read_error_test 00:12:53.118 ************************************ 00:12:53.118 17:47:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:53.118 17:47:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:53.118 17:47:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.118 17:47:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.118 ************************************ 00:12:53.118 START TEST raid_write_error_test 00:12:53.118 ************************************ 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.o6q7zjuMVP 00:12:53.119 17:47:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75612 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75612 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75612 ']' 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.119 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.119 [2024-11-20 17:47:20.167666] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:12:53.119 [2024-11-20 17:47:20.167798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75612 ] 00:12:53.379 [2024-11-20 17:47:20.342383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.379 [2024-11-20 17:47:20.488029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.639 [2024-11-20 17:47:20.737003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.639 [2024-11-20 17:47:20.737089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.899 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.899 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:53.899 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:53.899 17:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.899 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.899 17:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.899 BaseBdev1_malloc 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.899 true 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.899 [2024-11-20 17:47:21.066962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:53.899 [2024-11-20 17:47:21.067055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.899 [2024-11-20 17:47:21.067076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:53.899 [2024-11-20 17:47:21.067088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.899 [2024-11-20 17:47:21.069525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.899 [2024-11-20 17:47:21.069564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:53.899 BaseBdev1 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:53.899 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 BaseBdev2_malloc 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:54.163 17:47:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 true 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 [2024-11-20 17:47:21.138416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:54.163 [2024-11-20 17:47:21.138480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.163 [2024-11-20 17:47:21.138498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:54.163 [2024-11-20 17:47:21.138509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.163 [2024-11-20 17:47:21.140857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.163 [2024-11-20 17:47:21.140891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.163 BaseBdev2 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:54.163 BaseBdev3_malloc 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 true 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 [2024-11-20 17:47:21.223040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:54.163 [2024-11-20 17:47:21.223095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.163 [2024-11-20 17:47:21.223111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:54.163 [2024-11-20 17:47:21.223121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.163 [2024-11-20 17:47:21.225513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.163 [2024-11-20 17:47:21.225551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:54.163 BaseBdev3 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 BaseBdev4_malloc 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 true 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 [2024-11-20 17:47:21.294394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:54.163 [2024-11-20 17:47:21.294451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.163 [2024-11-20 17:47:21.294468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:54.163 [2024-11-20 17:47:21.294480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.163 [2024-11-20 17:47:21.296784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.163 [2024-11-20 17:47:21.296826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:54.163 BaseBdev4 
00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 [2024-11-20 17:47:21.306434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.163 [2024-11-20 17:47:21.308483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.163 [2024-11-20 17:47:21.308559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.163 [2024-11-20 17:47:21.308617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:54.163 [2024-11-20 17:47:21.308866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:54.163 [2024-11-20 17:47:21.308890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.163 [2024-11-20 17:47:21.309146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:54.163 [2024-11-20 17:47:21.309331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:54.163 [2024-11-20 17:47:21.309346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:54.163 [2024-11-20 17:47:21.309498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:54.163 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.164 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.423 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.423 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.423 "name": "raid_bdev1", 00:12:54.423 "uuid": "1d162f4d-be4e-4c7b-ae13-ad5bf756dacf", 00:12:54.423 "strip_size_kb": 0, 00:12:54.423 "state": "online", 00:12:54.423 "raid_level": "raid1", 00:12:54.423 "superblock": true, 00:12:54.423 "num_base_bdevs": 4, 00:12:54.423 "num_base_bdevs_discovered": 4, 00:12:54.423 
"num_base_bdevs_operational": 4, 00:12:54.423 "base_bdevs_list": [ 00:12:54.423 { 00:12:54.423 "name": "BaseBdev1", 00:12:54.423 "uuid": "4b52f623-2ee8-568a-81a2-ffef5352fc10", 00:12:54.423 "is_configured": true, 00:12:54.423 "data_offset": 2048, 00:12:54.423 "data_size": 63488 00:12:54.423 }, 00:12:54.423 { 00:12:54.423 "name": "BaseBdev2", 00:12:54.423 "uuid": "0079bbde-502f-5d6b-a714-55558ce9c099", 00:12:54.423 "is_configured": true, 00:12:54.423 "data_offset": 2048, 00:12:54.423 "data_size": 63488 00:12:54.423 }, 00:12:54.423 { 00:12:54.423 "name": "BaseBdev3", 00:12:54.423 "uuid": "f6c96e15-9f61-55b3-befa-02f65a40abe5", 00:12:54.423 "is_configured": true, 00:12:54.423 "data_offset": 2048, 00:12:54.423 "data_size": 63488 00:12:54.423 }, 00:12:54.423 { 00:12:54.423 "name": "BaseBdev4", 00:12:54.423 "uuid": "956892e0-9348-5d47-9734-281155c62ec0", 00:12:54.423 "is_configured": true, 00:12:54.423 "data_offset": 2048, 00:12:54.423 "data_size": 63488 00:12:54.423 } 00:12:54.423 ] 00:12:54.423 }' 00:12:54.423 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.423 17:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.682 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:54.682 17:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:54.942 [2024-11-20 17:47:21.894784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.880 [2024-11-20 17:47:22.803616] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:55.880 [2024-11-20 17:47:22.803696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.880 [2024-11-20 17:47:22.803952] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.880 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.880 "name": "raid_bdev1", 00:12:55.880 "uuid": "1d162f4d-be4e-4c7b-ae13-ad5bf756dacf", 00:12:55.880 "strip_size_kb": 0, 00:12:55.880 "state": "online", 00:12:55.880 "raid_level": "raid1", 00:12:55.880 "superblock": true, 00:12:55.880 "num_base_bdevs": 4, 00:12:55.880 "num_base_bdevs_discovered": 3, 00:12:55.880 "num_base_bdevs_operational": 3, 00:12:55.880 "base_bdevs_list": [ 00:12:55.880 { 00:12:55.880 "name": null, 00:12:55.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.880 "is_configured": false, 00:12:55.880 "data_offset": 0, 00:12:55.880 "data_size": 63488 00:12:55.880 }, 00:12:55.880 { 00:12:55.880 "name": "BaseBdev2", 00:12:55.880 "uuid": "0079bbde-502f-5d6b-a714-55558ce9c099", 00:12:55.880 "is_configured": true, 00:12:55.880 "data_offset": 2048, 00:12:55.880 "data_size": 63488 00:12:55.880 }, 00:12:55.880 { 00:12:55.881 "name": "BaseBdev3", 00:12:55.881 "uuid": "f6c96e15-9f61-55b3-befa-02f65a40abe5", 00:12:55.881 "is_configured": true, 00:12:55.881 "data_offset": 2048, 00:12:55.881 "data_size": 63488 00:12:55.881 }, 00:12:55.881 { 00:12:55.881 "name": "BaseBdev4", 00:12:55.881 "uuid": "956892e0-9348-5d47-9734-281155c62ec0", 00:12:55.881 "is_configured": true, 00:12:55.881 "data_offset": 2048, 00:12:55.881 "data_size": 63488 00:12:55.881 } 00:12:55.881 ] 
00:12:55.881 }' 00:12:55.881 17:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.881 17:47:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.140 [2024-11-20 17:47:23.258123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.140 [2024-11-20 17:47:23.258175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.140 [2024-11-20 17:47:23.260984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.140 [2024-11-20 17:47:23.261052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.140 [2024-11-20 17:47:23.261164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.140 [2024-11-20 17:47:23.261183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:56.140 { 00:12:56.140 "results": [ 00:12:56.140 { 00:12:56.140 "job": "raid_bdev1", 00:12:56.140 "core_mask": "0x1", 00:12:56.140 "workload": "randrw", 00:12:56.140 "percentage": 50, 00:12:56.140 "status": "finished", 00:12:56.140 "queue_depth": 1, 00:12:56.140 "io_size": 131072, 00:12:56.140 "runtime": 1.363876, 00:12:56.140 "iops": 8696.538394986055, 00:12:56.140 "mibps": 1087.067299373257, 00:12:56.140 "io_failed": 0, 00:12:56.140 "io_timeout": 0, 00:12:56.140 "avg_latency_us": 112.37218656129275, 00:12:56.140 "min_latency_us": 22.022707423580787, 00:12:56.140 "max_latency_us": 1402.2986899563318 00:12:56.140 } 00:12:56.140 ], 00:12:56.140 "core_count": 1 
00:12:56.140 } 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75612 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75612 ']' 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75612 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75612 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.140 killing process with pid 75612 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75612' 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75612 00:12:56.140 [2024-11-20 17:47:23.309889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.140 17:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75612 00:12:56.709 [2024-11-20 17:47:23.659538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.o6q7zjuMVP 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:58.090 00:12:58.090 real 0m4.913s 00:12:58.090 user 0m5.680s 00:12:58.090 sys 0m0.693s 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.090 17:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.090 ************************************ 00:12:58.090 END TEST raid_write_error_test 00:12:58.090 ************************************ 00:12:58.090 17:47:25 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:58.090 17:47:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:58.090 17:47:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:58.090 17:47:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:58.090 17:47:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.090 17:47:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.090 ************************************ 00:12:58.090 START TEST raid_rebuild_test 00:12:58.090 ************************************ 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:58.090 
17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75756 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75756 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75756 ']' 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.090 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.090 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:58.090 Zero copy mechanism will not be used. 00:12:58.090 [2024-11-20 17:47:25.148981] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:12:58.090 [2024-11-20 17:47:25.149119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75756 ] 00:12:58.350 [2024-11-20 17:47:25.306564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.350 [2024-11-20 17:47:25.450582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.608 [2024-11-20 17:47:25.696880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.608 [2024-11-20 17:47:25.696937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.867 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.867 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:58.867 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.867 17:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:58.867 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.867 17:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 BaseBdev1_malloc 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 [2024-11-20 17:47:26.056423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:59.128 
[2024-11-20 17:47:26.056508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.128 [2024-11-20 17:47:26.056533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.128 [2024-11-20 17:47:26.056546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.128 [2024-11-20 17:47:26.059067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.128 [2024-11-20 17:47:26.059106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.128 BaseBdev1 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 BaseBdev2_malloc 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 [2024-11-20 17:47:26.119843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:59.128 [2024-11-20 17:47:26.119924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.128 [2024-11-20 17:47:26.119948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:59.128 [2024-11-20 17:47:26.119961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.128 [2024-11-20 17:47:26.122360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.128 [2024-11-20 17:47:26.122416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.128 BaseBdev2 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 spare_malloc 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 spare_delay 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 [2024-11-20 17:47:26.208148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.128 [2024-11-20 17:47:26.208226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:59.128 [2024-11-20 17:47:26.208249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:59.128 [2024-11-20 17:47:26.208263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.128 [2024-11-20 17:47:26.210729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.128 [2024-11-20 17:47:26.210768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.128 spare 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 [2024-11-20 17:47:26.220183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.128 [2024-11-20 17:47:26.222264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.128 [2024-11-20 17:47:26.222358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.128 [2024-11-20 17:47:26.222372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:59.128 [2024-11-20 17:47:26.222629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:59.128 [2024-11-20 17:47:26.222800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.128 [2024-11-20 17:47:26.222817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:59.128 [2024-11-20 17:47:26.223003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.128 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.128 "name": "raid_bdev1", 00:12:59.128 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:12:59.128 "strip_size_kb": 0, 00:12:59.128 "state": "online", 00:12:59.129 
"raid_level": "raid1", 00:12:59.129 "superblock": false, 00:12:59.129 "num_base_bdevs": 2, 00:12:59.129 "num_base_bdevs_discovered": 2, 00:12:59.129 "num_base_bdevs_operational": 2, 00:12:59.129 "base_bdevs_list": [ 00:12:59.129 { 00:12:59.129 "name": "BaseBdev1", 00:12:59.129 "uuid": "9cee7ffd-7630-5f23-b972-a2253cada6f2", 00:12:59.129 "is_configured": true, 00:12:59.129 "data_offset": 0, 00:12:59.129 "data_size": 65536 00:12:59.129 }, 00:12:59.129 { 00:12:59.129 "name": "BaseBdev2", 00:12:59.129 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:12:59.129 "is_configured": true, 00:12:59.129 "data_offset": 0, 00:12:59.129 "data_size": 65536 00:12:59.129 } 00:12:59.129 ] 00:12:59.129 }' 00:12:59.129 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.129 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:59.698 [2024-11-20 17:47:26.711686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.698 17:47:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.698 17:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:59.958 [2024-11-20 17:47:26.995031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:59.958 /dev/nbd0 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.958 1+0 records in 00:12:59.958 1+0 records out 00:12:59.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486189 s, 8.4 MB/s 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:59.958 17:47:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:05.238 65536+0 records in 00:13:05.238 65536+0 records out 00:13:05.238 33554432 bytes (34 MB, 32 MiB) copied, 4.5817 s, 7.3 MB/s 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:05.238 [2024-11-20 17:47:31.839596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.238 [2024-11-20 17:47:31.871616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.238 17:47:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.238 "name": "raid_bdev1", 00:13:05.238 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:05.238 "strip_size_kb": 0, 00:13:05.238 "state": "online", 00:13:05.238 "raid_level": "raid1", 00:13:05.238 "superblock": false, 00:13:05.238 "num_base_bdevs": 2, 00:13:05.238 "num_base_bdevs_discovered": 1, 00:13:05.238 "num_base_bdevs_operational": 1, 00:13:05.238 "base_bdevs_list": [ 00:13:05.238 { 00:13:05.238 "name": null, 00:13:05.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.238 "is_configured": false, 00:13:05.238 "data_offset": 0, 00:13:05.238 "data_size": 65536 00:13:05.238 }, 00:13:05.238 { 00:13:05.238 "name": "BaseBdev2", 00:13:05.238 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:05.238 "is_configured": true, 00:13:05.238 "data_offset": 0, 00:13:05.238 "data_size": 65536 00:13:05.238 } 00:13:05.238 ] 00:13:05.238 }' 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.238 17:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.238 17:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.238 17:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.238 17:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.238 [2024-11-20 17:47:32.259120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.238 [2024-11-20 17:47:32.277301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:13:05.238 17:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.238 17:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:05.238 [2024-11-20 17:47:32.279512] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.177 "name": "raid_bdev1", 00:13:06.177 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:06.177 "strip_size_kb": 0, 00:13:06.177 "state": "online", 00:13:06.177 "raid_level": "raid1", 00:13:06.177 "superblock": false, 00:13:06.177 "num_base_bdevs": 2, 00:13:06.177 "num_base_bdevs_discovered": 2, 00:13:06.177 "num_base_bdevs_operational": 2, 00:13:06.177 "process": { 00:13:06.177 "type": "rebuild", 00:13:06.177 "target": "spare", 00:13:06.177 "progress": { 00:13:06.177 
"blocks": 20480, 00:13:06.177 "percent": 31 00:13:06.177 } 00:13:06.177 }, 00:13:06.177 "base_bdevs_list": [ 00:13:06.177 { 00:13:06.177 "name": "spare", 00:13:06.177 "uuid": "99b234ef-bc21-5a24-922a-eb240c68ce71", 00:13:06.177 "is_configured": true, 00:13:06.177 "data_offset": 0, 00:13:06.177 "data_size": 65536 00:13:06.177 }, 00:13:06.177 { 00:13:06.177 "name": "BaseBdev2", 00:13:06.177 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:06.177 "is_configured": true, 00:13:06.177 "data_offset": 0, 00:13:06.177 "data_size": 65536 00:13:06.177 } 00:13:06.177 ] 00:13:06.177 }' 00:13:06.177 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.437 [2024-11-20 17:47:33.419202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.437 [2024-11-20 17:47:33.488927] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.437 [2024-11-20 17:47:33.489016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.437 [2024-11-20 17:47:33.489033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.437 [2024-11-20 17:47:33.489045] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.437 17:47:33 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.437 "name": "raid_bdev1", 00:13:06.437 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:06.437 "strip_size_kb": 0, 00:13:06.437 "state": "online", 00:13:06.437 "raid_level": "raid1", 00:13:06.437 
"superblock": false, 00:13:06.437 "num_base_bdevs": 2, 00:13:06.437 "num_base_bdevs_discovered": 1, 00:13:06.437 "num_base_bdevs_operational": 1, 00:13:06.437 "base_bdevs_list": [ 00:13:06.437 { 00:13:06.437 "name": null, 00:13:06.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.437 "is_configured": false, 00:13:06.437 "data_offset": 0, 00:13:06.437 "data_size": 65536 00:13:06.437 }, 00:13:06.437 { 00:13:06.437 "name": "BaseBdev2", 00:13:06.437 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:06.437 "is_configured": true, 00:13:06.437 "data_offset": 0, 00:13:06.437 "data_size": 65536 00:13:06.437 } 00:13:06.437 ] 00:13:06.437 }' 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.437 17:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.004 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.004 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.004 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.004 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.004 17:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:07.004 "name": "raid_bdev1", 00:13:07.004 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:07.004 "strip_size_kb": 0, 00:13:07.004 "state": "online", 00:13:07.004 "raid_level": "raid1", 00:13:07.004 "superblock": false, 00:13:07.004 "num_base_bdevs": 2, 00:13:07.004 "num_base_bdevs_discovered": 1, 00:13:07.004 "num_base_bdevs_operational": 1, 00:13:07.004 "base_bdevs_list": [ 00:13:07.004 { 00:13:07.004 "name": null, 00:13:07.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.004 "is_configured": false, 00:13:07.004 "data_offset": 0, 00:13:07.004 "data_size": 65536 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "name": "BaseBdev2", 00:13:07.004 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:07.004 "is_configured": true, 00:13:07.004 "data_offset": 0, 00:13:07.004 "data_size": 65536 00:13:07.004 } 00:13:07.004 ] 00:13:07.004 }' 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.004 [2024-11-20 17:47:34.134439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.004 [2024-11-20 17:47:34.152126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:07.004 17:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.004 
17:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:07.004 [2024-11-20 17:47:34.154294] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.426 "name": "raid_bdev1", 00:13:08.426 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:08.426 "strip_size_kb": 0, 00:13:08.426 "state": "online", 00:13:08.426 "raid_level": "raid1", 00:13:08.426 "superblock": false, 00:13:08.426 "num_base_bdevs": 2, 00:13:08.426 "num_base_bdevs_discovered": 2, 00:13:08.426 "num_base_bdevs_operational": 2, 00:13:08.426 "process": { 00:13:08.426 "type": "rebuild", 00:13:08.426 "target": "spare", 00:13:08.426 "progress": { 00:13:08.426 "blocks": 20480, 00:13:08.426 "percent": 31 00:13:08.426 } 00:13:08.426 }, 00:13:08.426 "base_bdevs_list": [ 
00:13:08.426 { 00:13:08.426 "name": "spare", 00:13:08.426 "uuid": "99b234ef-bc21-5a24-922a-eb240c68ce71", 00:13:08.426 "is_configured": true, 00:13:08.426 "data_offset": 0, 00:13:08.426 "data_size": 65536 00:13:08.426 }, 00:13:08.426 { 00:13:08.426 "name": "BaseBdev2", 00:13:08.426 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:08.426 "is_configured": true, 00:13:08.426 "data_offset": 0, 00:13:08.426 "data_size": 65536 00:13:08.426 } 00:13:08.426 ] 00:13:08.426 }' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=384 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.426 
17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.426 "name": "raid_bdev1", 00:13:08.426 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:08.426 "strip_size_kb": 0, 00:13:08.426 "state": "online", 00:13:08.426 "raid_level": "raid1", 00:13:08.426 "superblock": false, 00:13:08.426 "num_base_bdevs": 2, 00:13:08.426 "num_base_bdevs_discovered": 2, 00:13:08.426 "num_base_bdevs_operational": 2, 00:13:08.426 "process": { 00:13:08.426 "type": "rebuild", 00:13:08.426 "target": "spare", 00:13:08.426 "progress": { 00:13:08.426 "blocks": 22528, 00:13:08.426 "percent": 34 00:13:08.426 } 00:13:08.426 }, 00:13:08.426 "base_bdevs_list": [ 00:13:08.426 { 00:13:08.426 "name": "spare", 00:13:08.426 "uuid": "99b234ef-bc21-5a24-922a-eb240c68ce71", 00:13:08.426 "is_configured": true, 00:13:08.426 "data_offset": 0, 00:13:08.426 "data_size": 65536 00:13:08.426 }, 00:13:08.426 { 00:13:08.426 "name": "BaseBdev2", 00:13:08.426 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:08.426 "is_configured": true, 00:13:08.426 "data_offset": 0, 00:13:08.426 "data_size": 65536 00:13:08.426 } 00:13:08.426 ] 00:13:08.426 }' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.426 17:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.364 "name": "raid_bdev1", 00:13:09.364 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:09.364 "strip_size_kb": 0, 00:13:09.364 "state": "online", 00:13:09.364 "raid_level": "raid1", 00:13:09.364 "superblock": false, 00:13:09.364 "num_base_bdevs": 2, 00:13:09.364 "num_base_bdevs_discovered": 2, 00:13:09.364 "num_base_bdevs_operational": 2, 00:13:09.364 "process": { 
00:13:09.364 "type": "rebuild", 00:13:09.364 "target": "spare", 00:13:09.364 "progress": { 00:13:09.364 "blocks": 47104, 00:13:09.364 "percent": 71 00:13:09.364 } 00:13:09.364 }, 00:13:09.364 "base_bdevs_list": [ 00:13:09.364 { 00:13:09.364 "name": "spare", 00:13:09.364 "uuid": "99b234ef-bc21-5a24-922a-eb240c68ce71", 00:13:09.364 "is_configured": true, 00:13:09.364 "data_offset": 0, 00:13:09.364 "data_size": 65536 00:13:09.364 }, 00:13:09.364 { 00:13:09.364 "name": "BaseBdev2", 00:13:09.364 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:09.364 "is_configured": true, 00:13:09.364 "data_offset": 0, 00:13:09.364 "data_size": 65536 00:13:09.364 } 00:13:09.364 ] 00:13:09.364 }' 00:13:09.364 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.624 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.624 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.624 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.624 17:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:10.563 [2024-11-20 17:47:37.379556] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:10.563 [2024-11-20 17:47:37.379664] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:10.563 [2024-11-20 17:47:37.379718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.563 "name": "raid_bdev1", 00:13:10.563 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:10.563 "strip_size_kb": 0, 00:13:10.563 "state": "online", 00:13:10.563 "raid_level": "raid1", 00:13:10.563 "superblock": false, 00:13:10.563 "num_base_bdevs": 2, 00:13:10.563 "num_base_bdevs_discovered": 2, 00:13:10.563 "num_base_bdevs_operational": 2, 00:13:10.563 "base_bdevs_list": [ 00:13:10.563 { 00:13:10.563 "name": "spare", 00:13:10.563 "uuid": "99b234ef-bc21-5a24-922a-eb240c68ce71", 00:13:10.563 "is_configured": true, 00:13:10.563 "data_offset": 0, 00:13:10.563 "data_size": 65536 00:13:10.563 }, 00:13:10.563 { 00:13:10.563 "name": "BaseBdev2", 00:13:10.563 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:10.563 "is_configured": true, 00:13:10.563 "data_offset": 0, 00:13:10.563 "data_size": 65536 00:13:10.563 } 00:13:10.563 ] 00:13:10.563 }' 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.563 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:10.563 17:47:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.822 "name": "raid_bdev1", 00:13:10.822 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:10.822 "strip_size_kb": 0, 00:13:10.822 "state": "online", 00:13:10.822 "raid_level": "raid1", 00:13:10.822 "superblock": false, 00:13:10.822 "num_base_bdevs": 2, 00:13:10.822 "num_base_bdevs_discovered": 2, 00:13:10.822 "num_base_bdevs_operational": 2, 00:13:10.822 "base_bdevs_list": [ 00:13:10.822 { 00:13:10.822 "name": "spare", 00:13:10.822 "uuid": "99b234ef-bc21-5a24-922a-eb240c68ce71", 00:13:10.822 "is_configured": true, 
00:13:10.822 "data_offset": 0, 00:13:10.822 "data_size": 65536 00:13:10.822 }, 00:13:10.822 { 00:13:10.822 "name": "BaseBdev2", 00:13:10.822 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:10.822 "is_configured": true, 00:13:10.822 "data_offset": 0, 00:13:10.822 "data_size": 65536 00:13:10.822 } 00:13:10.822 ] 00:13:10.822 }' 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.822 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.823 "name": "raid_bdev1", 00:13:10.823 "uuid": "ec842e49-fccc-4ceb-8e8d-4af9bc60b79f", 00:13:10.823 "strip_size_kb": 0, 00:13:10.823 "state": "online", 00:13:10.823 "raid_level": "raid1", 00:13:10.823 "superblock": false, 00:13:10.823 "num_base_bdevs": 2, 00:13:10.823 "num_base_bdevs_discovered": 2, 00:13:10.823 "num_base_bdevs_operational": 2, 00:13:10.823 "base_bdevs_list": [ 00:13:10.823 { 00:13:10.823 "name": "spare", 00:13:10.823 "uuid": "99b234ef-bc21-5a24-922a-eb240c68ce71", 00:13:10.823 "is_configured": true, 00:13:10.823 "data_offset": 0, 00:13:10.823 "data_size": 65536 00:13:10.823 }, 00:13:10.823 { 00:13:10.823 "name": "BaseBdev2", 00:13:10.823 "uuid": "082c1e32-bc44-511d-823e-a755cd07e5d1", 00:13:10.823 "is_configured": true, 00:13:10.823 "data_offset": 0, 00:13:10.823 "data_size": 65536 00:13:10.823 } 00:13:10.823 ] 00:13:10.823 }' 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.823 17:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.391 [2024-11-20 17:47:38.288188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.391 [2024-11-20 17:47:38.288240] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.391 [2024-11-20 17:47:38.288359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.391 [2024-11-20 17:47:38.288443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.391 [2024-11-20 17:47:38.288462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.391 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:11.651 /dev/nbd0 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.651 1+0 records in 00:13:11.651 1+0 records out 00:13:11.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367257 s, 11.2 MB/s 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.651 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:11.912 /dev/nbd1 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.912 1+0 records in 00:13:11.912 1+0 records out 00:13:11.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255453 s, 16.0 MB/s 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.912 17:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.172 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75756 00:13:12.432 17:47:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75756 ']' 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75756 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75756 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75756' 00:13:12.432 killing process with pid 75756 00:13:12.432 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75756 00:13:12.432 Received shutdown signal, test time was about 60.000000 seconds 00:13:12.432 00:13:12.433 Latency(us) 00:13:12.433 [2024-11-20T17:47:39.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.433 [2024-11-20T17:47:39.609Z] =================================================================================================================== 00:13:12.433 [2024-11-20T17:47:39.609Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:12.433 [2024-11-20 17:47:39.591986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.433 17:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75756 00:13:13.001 [2024-11-20 17:47:39.918074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:14.382 00:13:14.382 real 0m16.095s 00:13:14.382 user 0m17.526s 00:13:14.382 sys 0m3.393s 00:13:14.382 17:47:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.382 ************************************ 00:13:14.382 END TEST raid_rebuild_test 00:13:14.382 ************************************ 00:13:14.382 17:47:41 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:14.382 17:47:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:14.382 17:47:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.382 17:47:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.382 ************************************ 00:13:14.382 START TEST raid_rebuild_test_sb 00:13:14.382 ************************************ 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76186 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76186 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76186 ']' 00:13:14.382 17:47:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.382 17:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.382 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.382 Zero copy mechanism will not be used. 00:13:14.382 [2024-11-20 17:47:41.322839] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:13:14.382 [2024-11-20 17:47:41.322950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76186 ] 00:13:14.382 [2024-11-20 17:47:41.496940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.641 [2024-11-20 17:47:41.637436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.900 [2024-11-20 17:47:41.873708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.900 [2024-11-20 17:47:41.873788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.160 BaseBdev1_malloc 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.160 [2024-11-20 17:47:42.218945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.160 [2024-11-20 17:47:42.219027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.160 [2024-11-20 17:47:42.219053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:15.160 [2024-11-20 17:47:42.219066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.160 [2024-11-20 17:47:42.221492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.160 [2024-11-20 17:47:42.221534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.160 BaseBdev1 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.160 17:47:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.160 BaseBdev2_malloc 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.160 [2024-11-20 17:47:42.280048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:15.160 [2024-11-20 17:47:42.280118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.160 [2024-11-20 17:47:42.280144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:15.160 [2024-11-20 17:47:42.280155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.160 [2024-11-20 17:47:42.282558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.160 [2024-11-20 17:47:42.282595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.160 BaseBdev2 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.160 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.421 spare_malloc 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.421 spare_delay 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.421 [2024-11-20 17:47:42.364935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.421 [2024-11-20 17:47:42.365003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.421 [2024-11-20 17:47:42.365037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:15.421 [2024-11-20 17:47:42.365049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.421 [2024-11-20 17:47:42.367379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.421 [2024-11-20 17:47:42.367416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.421 spare 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.421 17:47:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.421 [2024-11-20 17:47:42.377012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.421 [2024-11-20 17:47:42.379069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.421 [2024-11-20 17:47:42.379250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:15.421 [2024-11-20 17:47:42.379265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:15.421 [2024-11-20 17:47:42.379500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:15.421 [2024-11-20 17:47:42.379684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:15.421 [2024-11-20 17:47:42.379699] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:15.421 [2024-11-20 17:47:42.379843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.421 "name": "raid_bdev1", 00:13:15.421 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:15.421 "strip_size_kb": 0, 00:13:15.421 "state": "online", 00:13:15.421 "raid_level": "raid1", 00:13:15.421 "superblock": true, 00:13:15.421 "num_base_bdevs": 2, 00:13:15.421 "num_base_bdevs_discovered": 2, 00:13:15.421 "num_base_bdevs_operational": 2, 00:13:15.421 "base_bdevs_list": [ 00:13:15.421 { 00:13:15.421 "name": "BaseBdev1", 00:13:15.421 "uuid": "781c92a8-ea3d-5f2c-a55e-809d95d60d04", 00:13:15.421 "is_configured": true, 00:13:15.421 "data_offset": 2048, 00:13:15.421 "data_size": 63488 00:13:15.421 }, 00:13:15.421 { 00:13:15.421 "name": "BaseBdev2", 00:13:15.421 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:15.421 "is_configured": true, 00:13:15.421 "data_offset": 2048, 00:13:15.421 "data_size": 63488 00:13:15.421 } 00:13:15.421 ] 00:13:15.421 }' 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.421 17:47:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.681 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:15.681 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:15.681 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.681 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.681 [2024-11-20 17:47:42.824539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.681 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.940 17:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:16.200 [2024-11-20 17:47:43.115873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:16.200 /dev/nbd0 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.200 1+0 records in 00:13:16.200 1+0 records out 00:13:16.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402597 s, 10.2 MB/s 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:16.200 17:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:20.549 63488+0 records in 00:13:20.549 63488+0 records out 00:13:20.549 32505856 bytes (33 MB, 31 MiB) copied, 4.21158 s, 7.7 MB/s 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.549 17:47:47 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:20.549 [2024-11-20 17:47:47.650411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.549 [2024-11-20 17:47:47.669000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:20.549 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.550 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.550 "name": "raid_bdev1", 00:13:20.550 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:20.550 "strip_size_kb": 0, 00:13:20.550 "state": "online", 00:13:20.550 "raid_level": "raid1", 00:13:20.550 "superblock": true, 
00:13:20.550 "num_base_bdevs": 2, 00:13:20.550 "num_base_bdevs_discovered": 1, 00:13:20.550 "num_base_bdevs_operational": 1, 00:13:20.550 "base_bdevs_list": [ 00:13:20.550 { 00:13:20.550 "name": null, 00:13:20.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.550 "is_configured": false, 00:13:20.550 "data_offset": 0, 00:13:20.550 "data_size": 63488 00:13:20.550 }, 00:13:20.550 { 00:13:20.550 "name": "BaseBdev2", 00:13:20.550 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:20.550 "is_configured": true, 00:13:20.550 "data_offset": 2048, 00:13:20.550 "data_size": 63488 00:13:20.550 } 00:13:20.550 ] 00:13:20.550 }' 00:13:20.809 17:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.809 17:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.069 17:47:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.069 17:47:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.069 17:47:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.069 [2024-11-20 17:47:48.164253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.069 [2024-11-20 17:47:48.182328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:21.069 17:47:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.069 17:47:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:21.069 [2024-11-20 17:47:48.184504] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.451 "name": "raid_bdev1", 00:13:22.451 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:22.451 "strip_size_kb": 0, 00:13:22.451 "state": "online", 00:13:22.451 "raid_level": "raid1", 00:13:22.451 "superblock": true, 00:13:22.451 "num_base_bdevs": 2, 00:13:22.451 "num_base_bdevs_discovered": 2, 00:13:22.451 "num_base_bdevs_operational": 2, 00:13:22.451 "process": { 00:13:22.451 "type": "rebuild", 00:13:22.451 "target": "spare", 00:13:22.451 "progress": { 00:13:22.451 "blocks": 20480, 00:13:22.451 "percent": 32 00:13:22.451 } 00:13:22.451 }, 00:13:22.451 "base_bdevs_list": [ 00:13:22.451 { 00:13:22.451 "name": "spare", 00:13:22.451 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:22.451 "is_configured": true, 00:13:22.451 "data_offset": 2048, 00:13:22.451 "data_size": 63488 00:13:22.451 }, 00:13:22.451 { 00:13:22.451 "name": "BaseBdev2", 00:13:22.451 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:22.451 "is_configured": true, 00:13:22.451 "data_offset": 2048, 00:13:22.451 "data_size": 63488 
00:13:22.451 } 00:13:22.451 ] 00:13:22.451 }' 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.451 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.452 [2024-11-20 17:47:49.329136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.452 [2024-11-20 17:47:49.394919] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.452 [2024-11-20 17:47:49.395035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.452 [2024-11-20 17:47:49.395053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.452 [2024-11-20 17:47:49.395064] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.452 "name": "raid_bdev1", 00:13:22.452 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:22.452 "strip_size_kb": 0, 00:13:22.452 "state": "online", 00:13:22.452 "raid_level": "raid1", 00:13:22.452 "superblock": true, 00:13:22.452 "num_base_bdevs": 2, 00:13:22.452 "num_base_bdevs_discovered": 1, 00:13:22.452 "num_base_bdevs_operational": 1, 00:13:22.452 "base_bdevs_list": [ 00:13:22.452 { 00:13:22.452 "name": null, 00:13:22.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.452 "is_configured": false, 00:13:22.452 "data_offset": 0, 00:13:22.452 "data_size": 63488 00:13:22.452 }, 00:13:22.452 { 00:13:22.452 "name": "BaseBdev2", 00:13:22.452 "uuid": 
"d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:22.452 "is_configured": true, 00:13:22.452 "data_offset": 2048, 00:13:22.452 "data_size": 63488 00:13:22.452 } 00:13:22.452 ] 00:13:22.452 }' 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.452 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.710 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.968 17:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.968 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.968 "name": "raid_bdev1", 00:13:22.968 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:22.968 "strip_size_kb": 0, 00:13:22.968 "state": "online", 00:13:22.968 "raid_level": "raid1", 00:13:22.968 "superblock": true, 00:13:22.968 "num_base_bdevs": 2, 00:13:22.968 "num_base_bdevs_discovered": 1, 00:13:22.968 "num_base_bdevs_operational": 1, 00:13:22.968 "base_bdevs_list": [ 00:13:22.968 { 
00:13:22.968 "name": null, 00:13:22.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.968 "is_configured": false, 00:13:22.968 "data_offset": 0, 00:13:22.968 "data_size": 63488 00:13:22.968 }, 00:13:22.968 { 00:13:22.968 "name": "BaseBdev2", 00:13:22.968 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:22.968 "is_configured": true, 00:13:22.968 "data_offset": 2048, 00:13:22.968 "data_size": 63488 00:13:22.968 } 00:13:22.968 ] 00:13:22.968 }' 00:13:22.968 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.968 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.968 17:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.968 17:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.968 17:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.968 17:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.968 17:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.968 [2024-11-20 17:47:50.029801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.968 [2024-11-20 17:47:50.048048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:22.968 17:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.969 17:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:22.969 [2024-11-20 17:47:50.050218] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.906 17:47:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.906 17:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.166 "name": "raid_bdev1", 00:13:24.166 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:24.166 "strip_size_kb": 0, 00:13:24.166 "state": "online", 00:13:24.166 "raid_level": "raid1", 00:13:24.166 "superblock": true, 00:13:24.166 "num_base_bdevs": 2, 00:13:24.166 "num_base_bdevs_discovered": 2, 00:13:24.166 "num_base_bdevs_operational": 2, 00:13:24.166 "process": { 00:13:24.166 "type": "rebuild", 00:13:24.166 "target": "spare", 00:13:24.166 "progress": { 00:13:24.166 "blocks": 20480, 00:13:24.166 "percent": 32 00:13:24.166 } 00:13:24.166 }, 00:13:24.166 "base_bdevs_list": [ 00:13:24.166 { 00:13:24.166 "name": "spare", 00:13:24.166 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:24.166 "is_configured": true, 00:13:24.166 "data_offset": 2048, 00:13:24.166 "data_size": 63488 00:13:24.166 }, 00:13:24.166 { 00:13:24.166 "name": "BaseBdev2", 00:13:24.166 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:24.166 
"is_configured": true, 00:13:24.166 "data_offset": 2048, 00:13:24.166 "data_size": 63488 00:13:24.166 } 00:13:24.166 ] 00:13:24.166 }' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:24.166 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.166 "name": "raid_bdev1", 00:13:24.166 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:24.166 "strip_size_kb": 0, 00:13:24.166 "state": "online", 00:13:24.166 "raid_level": "raid1", 00:13:24.166 "superblock": true, 00:13:24.166 "num_base_bdevs": 2, 00:13:24.166 "num_base_bdevs_discovered": 2, 00:13:24.166 "num_base_bdevs_operational": 2, 00:13:24.166 "process": { 00:13:24.166 "type": "rebuild", 00:13:24.166 "target": "spare", 00:13:24.166 "progress": { 00:13:24.166 "blocks": 22528, 00:13:24.166 "percent": 35 00:13:24.166 } 00:13:24.166 }, 00:13:24.166 "base_bdevs_list": [ 00:13:24.166 { 00:13:24.166 "name": "spare", 00:13:24.166 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:24.166 "is_configured": true, 00:13:24.166 "data_offset": 2048, 00:13:24.166 "data_size": 63488 00:13:24.166 }, 00:13:24.166 { 00:13:24.166 "name": "BaseBdev2", 00:13:24.166 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:24.166 "is_configured": true, 00:13:24.166 "data_offset": 2048, 00:13:24.166 "data_size": 63488 00:13:24.166 } 00:13:24.166 ] 00:13:24.166 }' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.166 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.166 17:47:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.426 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.426 17:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.362 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.362 "name": "raid_bdev1", 00:13:25.362 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:25.362 "strip_size_kb": 0, 00:13:25.362 "state": "online", 00:13:25.362 "raid_level": "raid1", 00:13:25.363 "superblock": true, 00:13:25.363 "num_base_bdevs": 2, 00:13:25.363 "num_base_bdevs_discovered": 2, 00:13:25.363 "num_base_bdevs_operational": 2, 00:13:25.363 "process": { 
00:13:25.363 "type": "rebuild", 00:13:25.363 "target": "spare", 00:13:25.363 "progress": { 00:13:25.363 "blocks": 47104, 00:13:25.363 "percent": 74 00:13:25.363 } 00:13:25.363 }, 00:13:25.363 "base_bdevs_list": [ 00:13:25.363 { 00:13:25.363 "name": "spare", 00:13:25.363 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:25.363 "is_configured": true, 00:13:25.363 "data_offset": 2048, 00:13:25.363 "data_size": 63488 00:13:25.363 }, 00:13:25.363 { 00:13:25.363 "name": "BaseBdev2", 00:13:25.363 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:25.363 "is_configured": true, 00:13:25.363 "data_offset": 2048, 00:13:25.363 "data_size": 63488 00:13:25.363 } 00:13:25.363 ] 00:13:25.363 }' 00:13:25.363 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.363 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.363 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.363 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.363 17:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.301 [2024-11-20 17:47:53.176380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:26.301 [2024-11-20 17:47:53.176605] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:26.301 [2024-11-20 17:47:53.176779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.560 
17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.560 "name": "raid_bdev1", 00:13:26.560 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:26.560 "strip_size_kb": 0, 00:13:26.560 "state": "online", 00:13:26.560 "raid_level": "raid1", 00:13:26.560 "superblock": true, 00:13:26.560 "num_base_bdevs": 2, 00:13:26.560 "num_base_bdevs_discovered": 2, 00:13:26.560 "num_base_bdevs_operational": 2, 00:13:26.560 "base_bdevs_list": [ 00:13:26.560 { 00:13:26.560 "name": "spare", 00:13:26.560 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:26.560 "is_configured": true, 00:13:26.560 "data_offset": 2048, 00:13:26.560 "data_size": 63488 00:13:26.560 }, 00:13:26.560 { 00:13:26.560 "name": "BaseBdev2", 00:13:26.560 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:26.560 "is_configured": true, 00:13:26.560 "data_offset": 2048, 00:13:26.560 "data_size": 63488 00:13:26.560 } 00:13:26.560 ] 00:13:26.560 }' 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.560 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.819 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.819 "name": "raid_bdev1", 00:13:26.819 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:26.819 "strip_size_kb": 0, 00:13:26.819 "state": "online", 00:13:26.819 "raid_level": "raid1", 00:13:26.819 "superblock": true, 00:13:26.819 "num_base_bdevs": 2, 00:13:26.819 "num_base_bdevs_discovered": 2, 00:13:26.819 "num_base_bdevs_operational": 2, 00:13:26.819 "base_bdevs_list": [ 00:13:26.819 { 00:13:26.819 
"name": "spare", 00:13:26.819 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:26.820 "is_configured": true, 00:13:26.820 "data_offset": 2048, 00:13:26.820 "data_size": 63488 00:13:26.820 }, 00:13:26.820 { 00:13:26.820 "name": "BaseBdev2", 00:13:26.820 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:26.820 "is_configured": true, 00:13:26.820 "data_offset": 2048, 00:13:26.820 "data_size": 63488 00:13:26.820 } 00:13:26.820 ] 00:13:26.820 }' 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.820 "name": "raid_bdev1", 00:13:26.820 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:26.820 "strip_size_kb": 0, 00:13:26.820 "state": "online", 00:13:26.820 "raid_level": "raid1", 00:13:26.820 "superblock": true, 00:13:26.820 "num_base_bdevs": 2, 00:13:26.820 "num_base_bdevs_discovered": 2, 00:13:26.820 "num_base_bdevs_operational": 2, 00:13:26.820 "base_bdevs_list": [ 00:13:26.820 { 00:13:26.820 "name": "spare", 00:13:26.820 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:26.820 "is_configured": true, 00:13:26.820 "data_offset": 2048, 00:13:26.820 "data_size": 63488 00:13:26.820 }, 00:13:26.820 { 00:13:26.820 "name": "BaseBdev2", 00:13:26.820 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:26.820 "is_configured": true, 00:13:26.820 "data_offset": 2048, 00:13:26.820 "data_size": 63488 00:13:26.820 } 00:13:26.820 ] 00:13:26.820 }' 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.820 17:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.388 [2024-11-20 17:47:54.330319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.388 [2024-11-20 17:47:54.330361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.388 [2024-11-20 17:47:54.330471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.388 [2024-11-20 17:47:54.330553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.388 [2024-11-20 17:47:54.330567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:27.388 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:27.648 /dev/nbd0 00:13:27.648 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.649 1+0 records in 00:13:27.649 1+0 records out 00:13:27.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260565 s, 15.7 MB/s 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:27.649 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:27.911 /dev/nbd1 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:27.911 17:47:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.911 1+0 records in 00:13:27.911 1+0 records out 00:13:27.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410416 s, 10.0 MB/s 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:27.911 17:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:28.173 
17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.173 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.434 [2024-11-20 17:47:55.590216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:28.434 [2024-11-20 17:47:55.590340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.434 [2024-11-20 17:47:55.590384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:28.434 [2024-11-20 17:47:55.590415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.434 [2024-11-20 17:47:55.592603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.434 [2024-11-20 17:47:55.592679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.434 [2024-11-20 17:47:55.592805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:28.434 [2024-11-20 
17:47:55.592904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.434 [2024-11-20 17:47:55.593098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.434 spare 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.434 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.693 [2024-11-20 17:47:55.693058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:28.693 [2024-11-20 17:47:55.693099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:28.693 [2024-11-20 17:47:55.693434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:28.693 [2024-11-20 17:47:55.693640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:28.693 [2024-11-20 17:47:55.693651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:28.693 [2024-11-20 17:47:55.693848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.693 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.694 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.694 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.694 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.694 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.694 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.694 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.694 "name": "raid_bdev1", 00:13:28.694 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:28.694 "strip_size_kb": 0, 00:13:28.694 "state": "online", 00:13:28.694 "raid_level": "raid1", 00:13:28.694 "superblock": true, 00:13:28.694 "num_base_bdevs": 2, 00:13:28.694 "num_base_bdevs_discovered": 2, 00:13:28.694 "num_base_bdevs_operational": 2, 00:13:28.694 "base_bdevs_list": [ 00:13:28.694 { 00:13:28.694 "name": "spare", 00:13:28.694 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:28.694 "is_configured": true, 00:13:28.694 "data_offset": 2048, 00:13:28.694 "data_size": 63488 00:13:28.694 }, 00:13:28.694 { 00:13:28.694 "name": "BaseBdev2", 00:13:28.694 "uuid": 
"d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:28.694 "is_configured": true, 00:13:28.694 "data_offset": 2048, 00:13:28.694 "data_size": 63488 00:13:28.694 } 00:13:28.694 ] 00:13:28.694 }' 00:13:28.694 17:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.694 17:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.953 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.212 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.212 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.212 "name": "raid_bdev1", 00:13:29.212 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:29.212 "strip_size_kb": 0, 00:13:29.212 "state": "online", 00:13:29.212 "raid_level": "raid1", 00:13:29.212 "superblock": true, 00:13:29.212 "num_base_bdevs": 2, 00:13:29.212 "num_base_bdevs_discovered": 2, 00:13:29.212 "num_base_bdevs_operational": 2, 00:13:29.212 "base_bdevs_list": [ 00:13:29.212 { 
00:13:29.212 "name": "spare", 00:13:29.212 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:29.212 "is_configured": true, 00:13:29.212 "data_offset": 2048, 00:13:29.212 "data_size": 63488 00:13:29.213 }, 00:13:29.213 { 00:13:29.213 "name": "BaseBdev2", 00:13:29.213 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:29.213 "is_configured": true, 00:13:29.213 "data_offset": 2048, 00:13:29.213 "data_size": 63488 00:13:29.213 } 00:13:29.213 ] 00:13:29.213 }' 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.213 [2024-11-20 17:47:56.309057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.213 "name": "raid_bdev1", 00:13:29.213 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:29.213 "strip_size_kb": 0, 00:13:29.213 
"state": "online", 00:13:29.213 "raid_level": "raid1", 00:13:29.213 "superblock": true, 00:13:29.213 "num_base_bdevs": 2, 00:13:29.213 "num_base_bdevs_discovered": 1, 00:13:29.213 "num_base_bdevs_operational": 1, 00:13:29.213 "base_bdevs_list": [ 00:13:29.213 { 00:13:29.213 "name": null, 00:13:29.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.213 "is_configured": false, 00:13:29.213 "data_offset": 0, 00:13:29.213 "data_size": 63488 00:13:29.213 }, 00:13:29.213 { 00:13:29.213 "name": "BaseBdev2", 00:13:29.213 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:29.213 "is_configured": true, 00:13:29.213 "data_offset": 2048, 00:13:29.213 "data_size": 63488 00:13:29.213 } 00:13:29.213 ] 00:13:29.213 }' 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.213 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.782 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:29.782 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.782 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.782 [2024-11-20 17:47:56.724406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.782 [2024-11-20 17:47:56.724626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:29.782 [2024-11-20 17:47:56.724651] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:29.782 [2024-11-20 17:47:56.724694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.782 [2024-11-20 17:47:56.741190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:29.782 17:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.782 17:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:29.782 [2024-11-20 17:47:56.743119] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.722 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.722 "name": "raid_bdev1", 00:13:30.722 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:30.722 "strip_size_kb": 0, 00:13:30.722 "state": "online", 00:13:30.722 "raid_level": "raid1", 
00:13:30.722 "superblock": true, 00:13:30.722 "num_base_bdevs": 2, 00:13:30.722 "num_base_bdevs_discovered": 2, 00:13:30.722 "num_base_bdevs_operational": 2, 00:13:30.722 "process": { 00:13:30.722 "type": "rebuild", 00:13:30.722 "target": "spare", 00:13:30.722 "progress": { 00:13:30.722 "blocks": 20480, 00:13:30.722 "percent": 32 00:13:30.722 } 00:13:30.723 }, 00:13:30.723 "base_bdevs_list": [ 00:13:30.723 { 00:13:30.723 "name": "spare", 00:13:30.723 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:30.723 "is_configured": true, 00:13:30.723 "data_offset": 2048, 00:13:30.723 "data_size": 63488 00:13:30.723 }, 00:13:30.723 { 00:13:30.723 "name": "BaseBdev2", 00:13:30.723 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:30.723 "is_configured": true, 00:13:30.723 "data_offset": 2048, 00:13:30.723 "data_size": 63488 00:13:30.723 } 00:13:30.723 ] 00:13:30.723 }' 00:13:30.723 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.723 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.723 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.723 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.723 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:30.723 17:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.723 17:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.723 [2024-11-20 17:47:57.890785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.990 [2024-11-20 17:47:57.949172] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:30.990 [2024-11-20 17:47:57.949241] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:30.990 [2024-11-20 17:47:57.949255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.990 [2024-11-20 17:47:57.949265] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.990 17:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.990 17:47:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.990 17:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.990 "name": "raid_bdev1", 00:13:30.990 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:30.990 "strip_size_kb": 0, 00:13:30.990 "state": "online", 00:13:30.990 "raid_level": "raid1", 00:13:30.990 "superblock": true, 00:13:30.990 "num_base_bdevs": 2, 00:13:30.990 "num_base_bdevs_discovered": 1, 00:13:30.990 "num_base_bdevs_operational": 1, 00:13:30.990 "base_bdevs_list": [ 00:13:30.990 { 00:13:30.990 "name": null, 00:13:30.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.990 "is_configured": false, 00:13:30.990 "data_offset": 0, 00:13:30.990 "data_size": 63488 00:13:30.990 }, 00:13:30.990 { 00:13:30.990 "name": "BaseBdev2", 00:13:30.990 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:30.990 "is_configured": true, 00:13:30.990 "data_offset": 2048, 00:13:30.990 "data_size": 63488 00:13:30.990 } 00:13:30.990 ] 00:13:30.990 }' 00:13:30.990 17:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.990 17:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.560 17:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:31.560 17:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.560 17:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.560 [2024-11-20 17:47:58.440237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:31.560 [2024-11-20 17:47:58.440316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.560 [2024-11-20 17:47:58.440343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:31.560 [2024-11-20 17:47:58.440357] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.560 [2024-11-20 17:47:58.440893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.560 [2024-11-20 17:47:58.440930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:31.560 [2024-11-20 17:47:58.441057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:31.560 [2024-11-20 17:47:58.441081] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:31.560 [2024-11-20 17:47:58.441096] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:31.560 [2024-11-20 17:47:58.441125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.560 [2024-11-20 17:47:58.459628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:31.560 spare 00:13:31.560 17:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.560 17:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:31.560 [2024-11-20 17:47:58.461574] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.500 "name": "raid_bdev1", 00:13:32.500 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:32.500 "strip_size_kb": 0, 00:13:32.500 "state": "online", 00:13:32.500 "raid_level": "raid1", 00:13:32.500 "superblock": true, 00:13:32.500 "num_base_bdevs": 2, 00:13:32.500 "num_base_bdevs_discovered": 2, 00:13:32.500 "num_base_bdevs_operational": 2, 00:13:32.500 "process": { 00:13:32.500 "type": "rebuild", 00:13:32.500 "target": "spare", 00:13:32.500 "progress": { 00:13:32.500 "blocks": 20480, 00:13:32.500 "percent": 32 00:13:32.500 } 00:13:32.500 }, 00:13:32.500 "base_bdevs_list": [ 00:13:32.500 { 00:13:32.500 "name": "spare", 00:13:32.500 "uuid": "0681c2ea-5a96-5d09-9b72-5281f0a028c5", 00:13:32.500 "is_configured": true, 00:13:32.500 "data_offset": 2048, 00:13:32.500 "data_size": 63488 00:13:32.500 }, 00:13:32.500 { 00:13:32.500 "name": "BaseBdev2", 00:13:32.500 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:32.500 "is_configured": true, 00:13:32.500 "data_offset": 2048, 00:13:32.500 "data_size": 63488 00:13:32.500 } 00:13:32.500 ] 00:13:32.500 }' 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.500 
17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.500 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.500 [2024-11-20 17:47:59.625296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.500 [2024-11-20 17:47:59.668051] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:32.500 [2024-11-20 17:47:59.668166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.500 [2024-11-20 17:47:59.668191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.500 [2024-11-20 17:47:59.668202] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.761 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.761 "name": "raid_bdev1", 00:13:32.761 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:32.761 "strip_size_kb": 0, 00:13:32.761 "state": "online", 00:13:32.761 "raid_level": "raid1", 00:13:32.761 "superblock": true, 00:13:32.761 "num_base_bdevs": 2, 00:13:32.761 "num_base_bdevs_discovered": 1, 00:13:32.761 "num_base_bdevs_operational": 1, 00:13:32.761 "base_bdevs_list": [ 00:13:32.761 { 00:13:32.761 "name": null, 00:13:32.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.761 "is_configured": false, 00:13:32.761 "data_offset": 0, 00:13:32.761 "data_size": 63488 00:13:32.761 }, 00:13:32.761 { 00:13:32.761 "name": "BaseBdev2", 00:13:32.762 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:32.762 "is_configured": true, 00:13:32.762 "data_offset": 2048, 00:13:32.762 "data_size": 63488 00:13:32.762 } 00:13:32.762 ] 00:13:32.762 }' 00:13:32.762 17:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.762 17:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.021 17:48:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.021 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.021 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.021 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.021 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.021 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.021 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.022 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.022 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.022 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.282 "name": "raid_bdev1", 00:13:33.282 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:33.282 "strip_size_kb": 0, 00:13:33.282 "state": "online", 00:13:33.282 "raid_level": "raid1", 00:13:33.282 "superblock": true, 00:13:33.282 "num_base_bdevs": 2, 00:13:33.282 "num_base_bdevs_discovered": 1, 00:13:33.282 "num_base_bdevs_operational": 1, 00:13:33.282 "base_bdevs_list": [ 00:13:33.282 { 00:13:33.282 "name": null, 00:13:33.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.282 "is_configured": false, 00:13:33.282 "data_offset": 0, 00:13:33.282 "data_size": 63488 00:13:33.282 }, 00:13:33.282 { 00:13:33.282 "name": "BaseBdev2", 00:13:33.282 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:33.282 "is_configured": true, 00:13:33.282 "data_offset": 2048, 00:13:33.282 "data_size": 
63488 00:13:33.282 } 00:13:33.282 ] 00:13:33.282 }' 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.282 [2024-11-20 17:48:00.325122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:33.282 [2024-11-20 17:48:00.325210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.282 [2024-11-20 17:48:00.325252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:33.282 [2024-11-20 17:48:00.325279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.282 [2024-11-20 17:48:00.325858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.282 [2024-11-20 17:48:00.325888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:33.282 [2024-11-20 17:48:00.326007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:33.282 [2024-11-20 17:48:00.326045] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:33.282 [2024-11-20 17:48:00.326063] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:33.282 [2024-11-20 17:48:00.326078] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:33.282 BaseBdev1 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.282 17:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.223 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.224 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.224 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.224 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.224 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.224 "name": "raid_bdev1", 00:13:34.224 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:34.224 "strip_size_kb": 0, 00:13:34.224 "state": "online", 00:13:34.224 "raid_level": "raid1", 00:13:34.224 "superblock": true, 00:13:34.224 "num_base_bdevs": 2, 00:13:34.224 "num_base_bdevs_discovered": 1, 00:13:34.224 "num_base_bdevs_operational": 1, 00:13:34.224 "base_bdevs_list": [ 00:13:34.224 { 00:13:34.224 "name": null, 00:13:34.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.224 "is_configured": false, 00:13:34.224 "data_offset": 0, 00:13:34.224 "data_size": 63488 00:13:34.224 }, 00:13:34.224 { 00:13:34.224 "name": "BaseBdev2", 00:13:34.224 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:34.224 "is_configured": true, 00:13:34.224 "data_offset": 2048, 00:13:34.224 "data_size": 63488 00:13:34.224 } 00:13:34.224 ] 00:13:34.224 }' 00:13:34.224 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.224 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.792 "name": "raid_bdev1", 00:13:34.792 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:34.792 "strip_size_kb": 0, 00:13:34.792 "state": "online", 00:13:34.792 "raid_level": "raid1", 00:13:34.792 "superblock": true, 00:13:34.792 "num_base_bdevs": 2, 00:13:34.792 "num_base_bdevs_discovered": 1, 00:13:34.792 "num_base_bdevs_operational": 1, 00:13:34.792 "base_bdevs_list": [ 00:13:34.792 { 00:13:34.792 "name": null, 00:13:34.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.792 "is_configured": false, 00:13:34.792 "data_offset": 0, 00:13:34.792 "data_size": 63488 00:13:34.792 }, 00:13:34.792 { 00:13:34.792 "name": "BaseBdev2", 00:13:34.792 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:34.792 "is_configured": true, 00:13:34.792 "data_offset": 2048, 00:13:34.792 "data_size": 63488 00:13:34.792 } 00:13:34.792 ] 00:13:34.792 }' 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.792 17:48:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.792 [2024-11-20 17:48:01.946798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.792 [2024-11-20 17:48:01.947051] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.792 [2024-11-20 17:48:01.947091] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:34.792 request: 00:13:34.792 { 00:13:34.792 "base_bdev": "BaseBdev1", 00:13:34.792 "raid_bdev": "raid_bdev1", 00:13:34.792 "method": 
"bdev_raid_add_base_bdev", 00:13:34.792 "req_id": 1 00:13:34.792 } 00:13:34.792 Got JSON-RPC error response 00:13:34.792 response: 00:13:34.792 { 00:13:34.792 "code": -22, 00:13:34.792 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:34.792 } 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:34.792 17:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.175 17:48:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.175 17:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.176 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.176 "name": "raid_bdev1", 00:13:36.176 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:36.176 "strip_size_kb": 0, 00:13:36.176 "state": "online", 00:13:36.176 "raid_level": "raid1", 00:13:36.176 "superblock": true, 00:13:36.176 "num_base_bdevs": 2, 00:13:36.176 "num_base_bdevs_discovered": 1, 00:13:36.176 "num_base_bdevs_operational": 1, 00:13:36.176 "base_bdevs_list": [ 00:13:36.176 { 00:13:36.176 "name": null, 00:13:36.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.176 "is_configured": false, 00:13:36.176 "data_offset": 0, 00:13:36.176 "data_size": 63488 00:13:36.176 }, 00:13:36.176 { 00:13:36.176 "name": "BaseBdev2", 00:13:36.176 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:36.176 "is_configured": true, 00:13:36.176 "data_offset": 2048, 00:13:36.176 "data_size": 63488 00:13:36.176 } 00:13:36.176 ] 00:13:36.176 }' 00:13:36.176 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.176 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.436 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.437 "name": "raid_bdev1", 00:13:36.437 "uuid": "72a7b003-6934-4878-b297-08c95f7c7589", 00:13:36.437 "strip_size_kb": 0, 00:13:36.437 "state": "online", 00:13:36.437 "raid_level": "raid1", 00:13:36.437 "superblock": true, 00:13:36.437 "num_base_bdevs": 2, 00:13:36.437 "num_base_bdevs_discovered": 1, 00:13:36.437 "num_base_bdevs_operational": 1, 00:13:36.437 "base_bdevs_list": [ 00:13:36.437 { 00:13:36.437 "name": null, 00:13:36.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.437 "is_configured": false, 00:13:36.437 "data_offset": 0, 00:13:36.437 "data_size": 63488 00:13:36.437 }, 00:13:36.437 { 00:13:36.437 "name": "BaseBdev2", 00:13:36.437 "uuid": "d515d2d5-0ba5-557d-80c8-87815b1f79c0", 00:13:36.437 "is_configured": true, 00:13:36.437 "data_offset": 2048, 00:13:36.437 "data_size": 63488 00:13:36.437 } 00:13:36.437 ] 00:13:36.437 }' 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76186 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76186 ']' 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76186 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76186 00:13:36.437 killing process with pid 76186 00:13:36.437 Received shutdown signal, test time was about 60.000000 seconds 00:13:36.437 00:13:36.437 Latency(us) 00:13:36.437 [2024-11-20T17:48:03.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.437 [2024-11-20T17:48:03.613Z] =================================================================================================================== 00:13:36.437 [2024-11-20T17:48:03.613Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76186' 00:13:36.437 17:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76186 00:13:36.437 [2024-11-20 17:48:03.577776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.437 17:48:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76186 00:13:36.437 [2024-11-20 17:48:03.577944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.437 [2024-11-20 17:48:03.578030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.437 [2024-11-20 17:48:03.578045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:37.007 [2024-11-20 17:48:03.908667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.410 ************************************ 00:13:38.410 END TEST raid_rebuild_test_sb 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:38.410 00:13:38.410 real 0m23.916s 00:13:38.410 user 0m29.086s 00:13:38.410 sys 0m4.042s 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.410 ************************************ 00:13:38.410 17:48:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:38.410 17:48:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:38.410 17:48:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.410 17:48:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.410 ************************************ 00:13:38.410 START TEST raid_rebuild_test_io 00:13:38.410 ************************************ 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:38.410 
17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76917 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76917 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76917 ']' 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.410 17:48:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.410 [2024-11-20 17:48:05.316962] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:13:38.410 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:38.410 Zero copy mechanism will not be used. 
00:13:38.410 [2024-11-20 17:48:05.317592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76917 ] 00:13:38.410 [2024-11-20 17:48:05.496573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.670 [2024-11-20 17:48:05.636856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.930 [2024-11-20 17:48:05.876582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.930 [2024-11-20 17:48:05.876659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.189 BaseBdev1_malloc 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.189 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.189 [2024-11-20 17:48:06.198056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:39.190 [2024-11-20 17:48:06.198128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.190 [2024-11-20 17:48:06.198154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:39.190 [2024-11-20 17:48:06.198167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.190 [2024-11-20 17:48:06.200630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.190 [2024-11-20 17:48:06.200672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:39.190 BaseBdev1 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.190 BaseBdev2_malloc 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.190 [2024-11-20 17:48:06.259247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:39.190 [2024-11-20 17:48:06.259315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.190 [2024-11-20 17:48:06.259340] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:39.190 [2024-11-20 17:48:06.259353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.190 [2024-11-20 17:48:06.262004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.190 [2024-11-20 17:48:06.262055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:39.190 BaseBdev2 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.190 spare_malloc 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.190 spare_delay 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.190 [2024-11-20 17:48:06.345248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:39.190 [2024-11-20 17:48:06.345378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.190 [2024-11-20 17:48:06.345404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:39.190 [2024-11-20 17:48:06.345416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.190 [2024-11-20 17:48:06.347873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.190 [2024-11-20 17:48:06.347913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:39.190 spare 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.190 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.190 [2024-11-20 17:48:06.357300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.190 [2024-11-20 17:48:06.359421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.190 [2024-11-20 17:48:06.359571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:39.190 [2024-11-20 17:48:06.359589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:39.190 [2024-11-20 17:48:06.359853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:39.190 [2024-11-20 17:48:06.360046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:39.190 [2024-11-20 17:48:06.360059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:39.190 [2024-11-20 17:48:06.360217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.448 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.448 
"name": "raid_bdev1", 00:13:39.448 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:39.448 "strip_size_kb": 0, 00:13:39.448 "state": "online", 00:13:39.448 "raid_level": "raid1", 00:13:39.448 "superblock": false, 00:13:39.448 "num_base_bdevs": 2, 00:13:39.448 "num_base_bdevs_discovered": 2, 00:13:39.449 "num_base_bdevs_operational": 2, 00:13:39.449 "base_bdevs_list": [ 00:13:39.449 { 00:13:39.449 "name": "BaseBdev1", 00:13:39.449 "uuid": "658daee1-d8f3-5acf-9647-1cf184abeb7e", 00:13:39.449 "is_configured": true, 00:13:39.449 "data_offset": 0, 00:13:39.449 "data_size": 65536 00:13:39.449 }, 00:13:39.449 { 00:13:39.449 "name": "BaseBdev2", 00:13:39.449 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:39.449 "is_configured": true, 00:13:39.449 "data_offset": 0, 00:13:39.449 "data_size": 65536 00:13:39.449 } 00:13:39.449 ] 00:13:39.449 }' 00:13:39.449 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.449 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.707 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.708 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.708 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.708 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:39.708 [2024-11-20 17:48:06.848914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.708 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.967 [2024-11-20 17:48:06.928383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:39.967 17:48:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.967 "name": "raid_bdev1", 00:13:39.967 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:39.967 "strip_size_kb": 0, 00:13:39.967 "state": "online", 00:13:39.967 "raid_level": "raid1", 00:13:39.967 "superblock": false, 00:13:39.967 "num_base_bdevs": 2, 00:13:39.967 "num_base_bdevs_discovered": 1, 00:13:39.967 "num_base_bdevs_operational": 1, 00:13:39.967 "base_bdevs_list": [ 00:13:39.967 { 00:13:39.967 "name": null, 00:13:39.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.967 "is_configured": false, 00:13:39.967 "data_offset": 0, 00:13:39.967 "data_size": 65536 00:13:39.967 }, 00:13:39.967 { 00:13:39.967 "name": "BaseBdev2", 00:13:39.967 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:39.967 "is_configured": true, 00:13:39.967 "data_offset": 0, 00:13:39.967 "data_size": 65536 00:13:39.967 } 00:13:39.967 ] 00:13:39.967 }' 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:39.967 17:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.967 [2024-11-20 17:48:07.021079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:39.967 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:39.967 Zero copy mechanism will not be used. 00:13:39.967 Running I/O for 60 seconds... 00:13:40.226 17:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.226 17:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.226 17:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.226 [2024-11-20 17:48:07.385288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.485 17:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.485 17:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:40.485 [2024-11-20 17:48:07.447291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:40.485 [2024-11-20 17:48:07.449669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.485 [2024-11-20 17:48:07.563867] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.485 [2024-11-20 17:48:07.564990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.744 [2024-11-20 17:48:07.803804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:41.003 216.00 IOPS, 648.00 MiB/s [2024-11-20T17:48:08.179Z] [2024-11-20 17:48:08.053256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:13:41.261 [2024-11-20 17:48:08.276006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:41.261 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.262 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.262 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.262 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.262 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.262 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.262 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.262 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.262 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.520 "name": "raid_bdev1", 00:13:41.520 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:41.520 "strip_size_kb": 0, 00:13:41.520 "state": "online", 00:13:41.520 "raid_level": "raid1", 00:13:41.520 "superblock": false, 00:13:41.520 "num_base_bdevs": 2, 00:13:41.520 "num_base_bdevs_discovered": 2, 00:13:41.520 "num_base_bdevs_operational": 2, 00:13:41.520 "process": { 00:13:41.520 "type": "rebuild", 00:13:41.520 "target": "spare", 00:13:41.520 "progress": { 00:13:41.520 "blocks": 12288, 00:13:41.520 "percent": 18 00:13:41.520 } 00:13:41.520 }, 00:13:41.520 "base_bdevs_list": [ 
00:13:41.520 { 00:13:41.520 "name": "spare", 00:13:41.520 "uuid": "aaa63a57-a272-520b-98b1-30acb83c1684", 00:13:41.520 "is_configured": true, 00:13:41.520 "data_offset": 0, 00:13:41.520 "data_size": 65536 00:13:41.520 }, 00:13:41.520 { 00:13:41.520 "name": "BaseBdev2", 00:13:41.520 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:41.520 "is_configured": true, 00:13:41.520 "data_offset": 0, 00:13:41.520 "data_size": 65536 00:13:41.520 } 00:13:41.520 ] 00:13:41.520 }' 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.520 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.520 [2024-11-20 17:48:08.571376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.520 [2024-11-20 17:48:08.653098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:41.778 [2024-11-20 17:48:08.752977] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.778 [2024-11-20 17:48:08.768520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.778 [2024-11-20 17:48:08.768601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.778 [2024-11-20 17:48:08.768624] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:13:41.778 [2024-11-20 17:48:08.809881] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.778 "name": "raid_bdev1", 00:13:41.778 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:41.778 "strip_size_kb": 0, 00:13:41.778 "state": "online", 00:13:41.778 "raid_level": "raid1", 00:13:41.778 "superblock": false, 00:13:41.778 "num_base_bdevs": 2, 00:13:41.778 "num_base_bdevs_discovered": 1, 00:13:41.778 "num_base_bdevs_operational": 1, 00:13:41.778 "base_bdevs_list": [ 00:13:41.778 { 00:13:41.778 "name": null, 00:13:41.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.778 "is_configured": false, 00:13:41.778 "data_offset": 0, 00:13:41.778 "data_size": 65536 00:13:41.778 }, 00:13:41.778 { 00:13:41.778 "name": "BaseBdev2", 00:13:41.778 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:41.778 "is_configured": true, 00:13:41.778 "data_offset": 0, 00:13:41.778 "data_size": 65536 00:13:41.778 } 00:13:41.778 ] 00:13:41.778 }' 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.778 17:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.296 165.50 IOPS, 496.50 MiB/s [2024-11-20T17:48:09.472Z] 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.296 17:48:09 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.296 "name": "raid_bdev1", 00:13:42.296 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:42.296 "strip_size_kb": 0, 00:13:42.296 "state": "online", 00:13:42.296 "raid_level": "raid1", 00:13:42.296 "superblock": false, 00:13:42.296 "num_base_bdevs": 2, 00:13:42.296 "num_base_bdevs_discovered": 1, 00:13:42.296 "num_base_bdevs_operational": 1, 00:13:42.296 "base_bdevs_list": [ 00:13:42.296 { 00:13:42.296 "name": null, 00:13:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.296 "is_configured": false, 00:13:42.296 "data_offset": 0, 00:13:42.296 "data_size": 65536 00:13:42.296 }, 00:13:42.296 { 00:13:42.296 "name": "BaseBdev2", 00:13:42.296 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:42.296 "is_configured": true, 00:13:42.296 "data_offset": 0, 00:13:42.296 "data_size": 65536 00:13:42.296 } 00:13:42.296 ] 00:13:42.296 }' 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.296 [2024-11-20 17:48:09.400392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.296 17:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:42.296 [2024-11-20 17:48:09.459313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:42.296 [2024-11-20 17:48:09.461519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.554 [2024-11-20 17:48:09.575613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.554 [2024-11-20 17:48:09.576482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.554 [2024-11-20 17:48:09.700317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.554 [2024-11-20 17:48:09.700875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.121 171.67 IOPS, 515.00 MiB/s [2024-11-20T17:48:10.297Z] [2024-11-20 17:48:10.042080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:43.121 [2024-11-20 17:48:10.192889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.380 [2024-11-20 17:48:10.434735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.380 "name": "raid_bdev1", 00:13:43.380 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:43.380 "strip_size_kb": 0, 00:13:43.380 "state": "online", 00:13:43.380 "raid_level": "raid1", 00:13:43.380 "superblock": false, 00:13:43.380 "num_base_bdevs": 2, 00:13:43.380 "num_base_bdevs_discovered": 2, 00:13:43.380 "num_base_bdevs_operational": 2, 00:13:43.380 "process": { 00:13:43.380 "type": "rebuild", 00:13:43.380 "target": "spare", 00:13:43.380 "progress": { 00:13:43.380 "blocks": 14336, 00:13:43.380 "percent": 21 00:13:43.380 } 00:13:43.380 }, 00:13:43.380 "base_bdevs_list": [ 00:13:43.380 { 00:13:43.380 "name": "spare", 00:13:43.380 "uuid": "aaa63a57-a272-520b-98b1-30acb83c1684", 00:13:43.380 "is_configured": true, 00:13:43.380 "data_offset": 0, 00:13:43.380 "data_size": 65536 00:13:43.380 }, 00:13:43.380 { 00:13:43.380 "name": "BaseBdev2", 00:13:43.380 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:43.380 "is_configured": true, 00:13:43.380 
"data_offset": 0, 00:13:43.380 "data_size": 65536 00:13:43.380 } 00:13:43.380 ] 00:13:43.380 }' 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.380 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.676 [2024-11-20 17:48:10.569407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=419 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.676 "name": "raid_bdev1", 00:13:43.676 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:43.676 "strip_size_kb": 0, 00:13:43.676 "state": "online", 00:13:43.676 "raid_level": "raid1", 00:13:43.676 "superblock": false, 00:13:43.676 "num_base_bdevs": 2, 00:13:43.676 "num_base_bdevs_discovered": 2, 00:13:43.676 "num_base_bdevs_operational": 2, 00:13:43.676 "process": { 00:13:43.676 "type": "rebuild", 00:13:43.676 "target": "spare", 00:13:43.676 "progress": { 00:13:43.676 "blocks": 16384, 00:13:43.676 "percent": 25 00:13:43.676 } 00:13:43.676 }, 00:13:43.676 "base_bdevs_list": [ 00:13:43.676 { 00:13:43.676 "name": "spare", 00:13:43.676 "uuid": "aaa63a57-a272-520b-98b1-30acb83c1684", 00:13:43.676 "is_configured": true, 00:13:43.676 "data_offset": 0, 00:13:43.676 "data_size": 65536 00:13:43.676 }, 00:13:43.676 { 00:13:43.676 "name": "BaseBdev2", 00:13:43.676 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:43.676 "is_configured": true, 00:13:43.676 "data_offset": 0, 00:13:43.676 "data_size": 65536 00:13:43.676 } 00:13:43.676 ] 00:13:43.676 }' 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.676 
17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.676 17:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.934 [2024-11-20 17:48:10.926068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:44.193 150.25 IOPS, 450.75 MiB/s [2024-11-20T17:48:11.369Z] [2024-11-20 17:48:11.251615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.762 "name": "raid_bdev1", 00:13:44.762 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:44.762 
"strip_size_kb": 0, 00:13:44.762 "state": "online", 00:13:44.762 "raid_level": "raid1", 00:13:44.762 "superblock": false, 00:13:44.762 "num_base_bdevs": 2, 00:13:44.762 "num_base_bdevs_discovered": 2, 00:13:44.762 "num_base_bdevs_operational": 2, 00:13:44.762 "process": { 00:13:44.762 "type": "rebuild", 00:13:44.762 "target": "spare", 00:13:44.762 "progress": { 00:13:44.762 "blocks": 32768, 00:13:44.762 "percent": 50 00:13:44.762 } 00:13:44.762 }, 00:13:44.762 "base_bdevs_list": [ 00:13:44.762 { 00:13:44.762 "name": "spare", 00:13:44.762 "uuid": "aaa63a57-a272-520b-98b1-30acb83c1684", 00:13:44.762 "is_configured": true, 00:13:44.762 "data_offset": 0, 00:13:44.762 "data_size": 65536 00:13:44.762 }, 00:13:44.762 { 00:13:44.762 "name": "BaseBdev2", 00:13:44.762 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:44.762 "is_configured": true, 00:13:44.762 "data_offset": 0, 00:13:44.762 "data_size": 65536 00:13:44.762 } 00:13:44.762 ] 00:13:44.762 }' 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.762 17:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.020 [2024-11-20 17:48:11.994497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:45.589 131.60 IOPS, 394.80 MiB/s [2024-11-20T17:48:12.765Z] [2024-11-20 17:48:12.538324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.848 "name": "raid_bdev1", 00:13:45.848 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:45.848 "strip_size_kb": 0, 00:13:45.848 "state": "online", 00:13:45.848 "raid_level": "raid1", 00:13:45.848 "superblock": false, 00:13:45.848 "num_base_bdevs": 2, 00:13:45.848 "num_base_bdevs_discovered": 2, 00:13:45.848 "num_base_bdevs_operational": 2, 00:13:45.848 "process": { 00:13:45.848 "type": "rebuild", 00:13:45.848 "target": "spare", 00:13:45.848 "progress": { 00:13:45.848 "blocks": 53248, 00:13:45.848 "percent": 81 00:13:45.848 } 00:13:45.848 }, 00:13:45.848 "base_bdevs_list": [ 00:13:45.848 { 00:13:45.848 "name": "spare", 00:13:45.848 "uuid": "aaa63a57-a272-520b-98b1-30acb83c1684", 00:13:45.848 "is_configured": true, 00:13:45.848 "data_offset": 0, 00:13:45.848 
"data_size": 65536 00:13:45.848 }, 00:13:45.848 { 00:13:45.848 "name": "BaseBdev2", 00:13:45.848 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:45.848 "is_configured": true, 00:13:45.848 "data_offset": 0, 00:13:45.848 "data_size": 65536 00:13:45.848 } 00:13:45.848 ] 00:13:45.848 }' 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.848 17:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.106 116.33 IOPS, 349.00 MiB/s [2024-11-20T17:48:13.282Z] 17:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.106 17:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.365 [2024-11-20 17:48:13.501863] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:46.623 [2024-11-20 17:48:13.601732] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:46.623 [2024-11-20 17:48:13.605756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.881 104.14 IOPS, 312.43 MiB/s [2024-11-20T17:48:14.057Z] 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.881 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.140 "name": "raid_bdev1", 00:13:47.140 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:47.140 "strip_size_kb": 0, 00:13:47.140 "state": "online", 00:13:47.140 "raid_level": "raid1", 00:13:47.140 "superblock": false, 00:13:47.140 "num_base_bdevs": 2, 00:13:47.140 "num_base_bdevs_discovered": 2, 00:13:47.140 "num_base_bdevs_operational": 2, 00:13:47.140 "base_bdevs_list": [ 00:13:47.140 { 00:13:47.140 "name": "spare", 00:13:47.140 "uuid": "aaa63a57-a272-520b-98b1-30acb83c1684", 00:13:47.140 "is_configured": true, 00:13:47.140 "data_offset": 0, 00:13:47.140 "data_size": 65536 00:13:47.140 }, 00:13:47.140 { 00:13:47.140 "name": "BaseBdev2", 00:13:47.140 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:47.140 "is_configured": true, 00:13:47.140 "data_offset": 0, 00:13:47.140 "data_size": 65536 00:13:47.140 } 00:13:47.140 ] 00:13:47.140 }' 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:47.140 17:48:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.140 "name": "raid_bdev1", 00:13:47.140 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:47.140 "strip_size_kb": 0, 00:13:47.140 "state": "online", 00:13:47.140 "raid_level": "raid1", 00:13:47.140 "superblock": false, 00:13:47.140 "num_base_bdevs": 2, 00:13:47.140 "num_base_bdevs_discovered": 2, 00:13:47.140 "num_base_bdevs_operational": 2, 00:13:47.140 "base_bdevs_list": [ 00:13:47.140 { 00:13:47.140 "name": "spare", 00:13:47.140 "uuid": "aaa63a57-a272-520b-98b1-30acb83c1684", 00:13:47.140 "is_configured": true, 00:13:47.140 "data_offset": 0, 00:13:47.140 "data_size": 65536 00:13:47.140 }, 00:13:47.140 { 00:13:47.140 "name": "BaseBdev2", 00:13:47.140 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 
00:13:47.140 "is_configured": true, 00:13:47.140 "data_offset": 0, 00:13:47.140 "data_size": 65536 00:13:47.140 } 00:13:47.140 ] 00:13:47.140 }' 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.140 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.399 "name": "raid_bdev1", 00:13:47.399 "uuid": "1caad729-544e-43fa-bab8-f64fd4d5b3c8", 00:13:47.399 "strip_size_kb": 0, 00:13:47.399 "state": "online", 00:13:47.399 "raid_level": "raid1", 00:13:47.399 "superblock": false, 00:13:47.399 "num_base_bdevs": 2, 00:13:47.399 "num_base_bdevs_discovered": 2, 00:13:47.399 "num_base_bdevs_operational": 2, 00:13:47.399 "base_bdevs_list": [ 00:13:47.399 { 00:13:47.399 "name": "spare", 00:13:47.399 "uuid": "aaa63a57-a272-520b-98b1-30acb83c1684", 00:13:47.399 "is_configured": true, 00:13:47.399 "data_offset": 0, 00:13:47.399 "data_size": 65536 00:13:47.399 }, 00:13:47.399 { 00:13:47.399 "name": "BaseBdev2", 00:13:47.399 "uuid": "d07d0e35-711f-57fa-aece-bec5fc3b230c", 00:13:47.399 "is_configured": true, 00:13:47.399 "data_offset": 0, 00:13:47.399 "data_size": 65536 00:13:47.399 } 00:13:47.399 ] 00:13:47.399 }' 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.399 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.657 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:47.657 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.657 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.657 [2024-11-20 17:48:14.762912] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.657 [2024-11-20 17:48:14.762992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:13:47.658 00:13:47.658 Latency(us) 00:13:47.658 [2024-11-20T17:48:14.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.658 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:47.658 raid_bdev1 : 7.80 98.30 294.91 0.00 0.00 14616.53 338.05 115847.04 00:13:47.658 [2024-11-20T17:48:14.834Z] =================================================================================================================== 00:13:47.658 [2024-11-20T17:48:14.834Z] Total : 98.30 294.91 0.00 0.00 14616.53 338.05 115847.04 00:13:47.916 [2024-11-20 17:48:14.834021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.916 [2024-11-20 17:48:14.834120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.916 [2024-11-20 17:48:14.834231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.916 [2024-11-20 17:48:14.834296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:47.916 { 00:13:47.916 "results": [ 00:13:47.916 { 00:13:47.916 "job": "raid_bdev1", 00:13:47.916 "core_mask": "0x1", 00:13:47.916 "workload": "randrw", 00:13:47.916 "percentage": 50, 00:13:47.916 "status": "finished", 00:13:47.916 "queue_depth": 2, 00:13:47.916 "io_size": 3145728, 00:13:47.916 "runtime": 7.80231, 00:13:47.916 "iops": 98.30422016043967, 00:13:47.916 "mibps": 294.912660481319, 00:13:47.916 "io_failed": 0, 00:13:47.916 "io_timeout": 0, 00:13:47.916 "avg_latency_us": 14616.527132877485, 00:13:47.916 "min_latency_us": 338.05414847161575, 00:13:47.916 "max_latency_us": 115847.04279475982 00:13:47.916 } 00:13:47.916 ], 00:13:47.916 "core_count": 1 00:13:47.916 } 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.916 17:48:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:47.916 /dev/nbd0 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.194 17:48:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.194 1+0 records in 00:13:48.194 1+0 records out 00:13:48.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593626 s, 6.9 MB/s 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:48.194 /dev/nbd1 00:13:48.194 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.452 1+0 records in 00:13:48.452 1+0 records out 00:13:48.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041404 s, 9.9 MB/s 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:48.452 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:48.453 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.453 17:48:15 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:48.453 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.453 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:48.453 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.453 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:48.711 17:48:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.711 17:48:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.969 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76917 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76917 ']' 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76917 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76917 00:13:48.970 killing process with pid 76917 00:13:48.970 Received shutdown signal, test time was about 9.085484 seconds 00:13:48.970 00:13:48.970 Latency(us) 00:13:48.970 [2024-11-20T17:48:16.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.970 
[2024-11-20T17:48:16.146Z] =================================================================================================================== 00:13:48.970 [2024-11-20T17:48:16.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76917' 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76917 00:13:48.970 [2024-11-20 17:48:16.091455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.970 17:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76917 00:13:49.228 [2024-11-20 17:48:16.342165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:50.609 00:13:50.609 real 0m12.424s 00:13:50.609 user 0m15.466s 00:13:50.609 sys 0m1.646s 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.609 ************************************ 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.609 END TEST raid_rebuild_test_io 00:13:50.609 ************************************ 00:13:50.609 17:48:17 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:50.609 17:48:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:50.609 17:48:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.609 17:48:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.609 ************************************ 00:13:50.609 START TEST 
raid_rebuild_test_sb_io 00:13:50.609 ************************************ 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@576 -- # local strip_size 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77295 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77295 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77295 ']' 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.609 17:48:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.869 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:50.869 Zero copy mechanism will not be used. 00:13:50.869 [2024-11-20 17:48:17.807826] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:13:50.869 [2024-11-20 17:48:17.807933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77295 ] 00:13:50.869 [2024-11-20 17:48:17.982221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.129 [2024-11-20 17:48:18.122664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.391 [2024-11-20 17:48:18.354154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.391 [2024-11-20 17:48:18.354222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.651 BaseBdev1_malloc 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:51.651 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.652 [2024-11-20 17:48:18.686605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:51.652 [2024-11-20 17:48:18.686684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.652 [2024-11-20 17:48:18.686707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:51.652 [2024-11-20 17:48:18.686719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.652 [2024-11-20 17:48:18.689150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.652 [2024-11-20 17:48:18.689186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.652 BaseBdev1 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.652 BaseBdev2_malloc 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.652 [2024-11-20 17:48:18.747534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:51.652 [2024-11-20 17:48:18.747597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.652 [2024-11-20 17:48:18.747621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:51.652 [2024-11-20 17:48:18.747633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.652 [2024-11-20 17:48:18.750076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.652 [2024-11-20 17:48:18.750111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:51.652 BaseBdev2 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.652 spare_malloc 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.652 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.912 spare_delay 
00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.912 [2024-11-20 17:48:18.836407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:51.912 [2024-11-20 17:48:18.836468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.912 [2024-11-20 17:48:18.836487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:51.912 [2024-11-20 17:48:18.836499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.912 [2024-11-20 17:48:18.838911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.912 [2024-11-20 17:48:18.838948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.912 spare 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.912 [2024-11-20 17:48:18.848473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.912 [2024-11-20 17:48:18.850580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.912 [2024-11-20 17:48:18.850780] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:51.912 [2024-11-20 17:48:18.850795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:51.912 [2024-11-20 17:48:18.851047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:51.912 [2024-11-20 17:48:18.851250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:51.912 [2024-11-20 17:48:18.851268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:51.912 [2024-11-20 17:48:18.851438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.912 17:48:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.912 "name": "raid_bdev1", 00:13:51.912 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:51.912 "strip_size_kb": 0, 00:13:51.912 "state": "online", 00:13:51.912 "raid_level": "raid1", 00:13:51.912 "superblock": true, 00:13:51.912 "num_base_bdevs": 2, 00:13:51.912 "num_base_bdevs_discovered": 2, 00:13:51.912 "num_base_bdevs_operational": 2, 00:13:51.912 "base_bdevs_list": [ 00:13:51.912 { 00:13:51.912 "name": "BaseBdev1", 00:13:51.912 "uuid": "a04b3968-3328-5ffb-86d1-76c44cb4ba74", 00:13:51.912 "is_configured": true, 00:13:51.912 "data_offset": 2048, 00:13:51.912 "data_size": 63488 00:13:51.912 }, 00:13:51.912 { 00:13:51.912 "name": "BaseBdev2", 00:13:51.912 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:51.912 "is_configured": true, 00:13:51.912 "data_offset": 2048, 00:13:51.912 "data_size": 63488 00:13:51.912 } 00:13:51.912 ] 00:13:51.912 }' 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.912 17:48:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:52.482 17:48:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.482 [2024-11-20 17:48:19.359888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.482 [2024-11-20 17:48:19.455444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.482 "name": "raid_bdev1", 00:13:52.482 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:52.482 "strip_size_kb": 0, 00:13:52.482 "state": "online", 00:13:52.482 
"raid_level": "raid1", 00:13:52.482 "superblock": true, 00:13:52.482 "num_base_bdevs": 2, 00:13:52.482 "num_base_bdevs_discovered": 1, 00:13:52.482 "num_base_bdevs_operational": 1, 00:13:52.482 "base_bdevs_list": [ 00:13:52.482 { 00:13:52.482 "name": null, 00:13:52.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.482 "is_configured": false, 00:13:52.482 "data_offset": 0, 00:13:52.482 "data_size": 63488 00:13:52.482 }, 00:13:52.482 { 00:13:52.482 "name": "BaseBdev2", 00:13:52.482 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:52.482 "is_configured": true, 00:13:52.482 "data_offset": 2048, 00:13:52.482 "data_size": 63488 00:13:52.482 } 00:13:52.482 ] 00:13:52.482 }' 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.482 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.483 [2024-11-20 17:48:19.552357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:52.483 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:52.483 Zero copy mechanism will not be used. 00:13:52.483 Running I/O for 60 seconds... 
00:13:52.742 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:52.742 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.742 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.742 [2024-11-20 17:48:19.877477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.001 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.001 17:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:53.001 [2024-11-20 17:48:19.946000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:53.001 [2024-11-20 17:48:19.948267] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.001 [2024-11-20 17:48:20.067983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.001 [2024-11-20 17:48:20.068960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.261 [2024-11-20 17:48:20.180921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.261 [2024-11-20 17:48:20.181483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.261 [2024-11-20 17:48:20.405597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:53.261 [2024-11-20 17:48:20.406454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:53.521 207.00 IOPS, 621.00 MiB/s [2024-11-20T17:48:20.697Z] [2024-11-20 17:48:20.617385] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:53.781 [2024-11-20 17:48:20.943906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.781 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.041 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.041 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.041 "name": "raid_bdev1", 00:13:54.041 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:54.041 "strip_size_kb": 0, 00:13:54.041 "state": "online", 00:13:54.041 "raid_level": "raid1", 00:13:54.041 "superblock": true, 00:13:54.041 "num_base_bdevs": 2, 00:13:54.041 "num_base_bdevs_discovered": 2, 00:13:54.041 "num_base_bdevs_operational": 2, 00:13:54.041 "process": { 00:13:54.041 "type": "rebuild", 00:13:54.041 "target": "spare", 00:13:54.041 "progress": { 
00:13:54.041 "blocks": 14336, 00:13:54.041 "percent": 22 00:13:54.041 } 00:13:54.041 }, 00:13:54.041 "base_bdevs_list": [ 00:13:54.041 { 00:13:54.041 "name": "spare", 00:13:54.041 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:13:54.041 "is_configured": true, 00:13:54.041 "data_offset": 2048, 00:13:54.041 "data_size": 63488 00:13:54.041 }, 00:13:54.041 { 00:13:54.041 "name": "BaseBdev2", 00:13:54.041 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:54.041 "is_configured": true, 00:13:54.041 "data_offset": 2048, 00:13:54.041 "data_size": 63488 00:13:54.041 } 00:13:54.041 ] 00:13:54.041 }' 00:13:54.041 17:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.041 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.041 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.041 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.041 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:54.041 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.041 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.041 [2024-11-20 17:48:21.060474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.041 [2024-11-20 17:48:21.070907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:54.041 [2024-11-20 17:48:21.173020] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.041 [2024-11-20 17:48:21.178358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.041 [2024-11-20 17:48:21.178420] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.041 [2024-11-20 17:48:21.178435] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.041 [2024-11-20 17:48:21.212307] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:54.301 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.301 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:54.301 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.301 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.301 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.301 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.302 17:48:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.302 "name": "raid_bdev1", 00:13:54.302 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:54.302 "strip_size_kb": 0, 00:13:54.302 "state": "online", 00:13:54.302 "raid_level": "raid1", 00:13:54.302 "superblock": true, 00:13:54.302 "num_base_bdevs": 2, 00:13:54.302 "num_base_bdevs_discovered": 1, 00:13:54.302 "num_base_bdevs_operational": 1, 00:13:54.302 "base_bdevs_list": [ 00:13:54.302 { 00:13:54.302 "name": null, 00:13:54.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.302 "is_configured": false, 00:13:54.302 "data_offset": 0, 00:13:54.302 "data_size": 63488 00:13:54.302 }, 00:13:54.302 { 00:13:54.302 "name": "BaseBdev2", 00:13:54.302 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:54.302 "is_configured": true, 00:13:54.302 "data_offset": 2048, 00:13:54.302 "data_size": 63488 00:13:54.302 } 00:13:54.302 ] 00:13:54.302 }' 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.302 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.562 187.00 IOPS, 561.00 MiB/s [2024-11-20T17:48:21.738Z] 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.562 
17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.562 "name": "raid_bdev1", 00:13:54.562 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:54.562 "strip_size_kb": 0, 00:13:54.562 "state": "online", 00:13:54.562 "raid_level": "raid1", 00:13:54.562 "superblock": true, 00:13:54.562 "num_base_bdevs": 2, 00:13:54.562 "num_base_bdevs_discovered": 1, 00:13:54.562 "num_base_bdevs_operational": 1, 00:13:54.562 "base_bdevs_list": [ 00:13:54.562 { 00:13:54.562 "name": null, 00:13:54.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.562 "is_configured": false, 00:13:54.562 "data_offset": 0, 00:13:54.562 "data_size": 63488 00:13:54.562 }, 00:13:54.562 { 00:13:54.562 "name": "BaseBdev2", 00:13:54.562 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:54.562 "is_configured": true, 00:13:54.562 "data_offset": 2048, 00:13:54.562 "data_size": 63488 00:13:54.562 } 00:13:54.562 ] 00:13:54.562 }' 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.562 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.822 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.822 17:48:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.822 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.822 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.822 [2024-11-20 17:48:21.788981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.822 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.822 17:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:54.822 [2024-11-20 17:48:21.845508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:54.822 [2024-11-20 17:48:21.847805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.822 [2024-11-20 17:48:21.957983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:54.822 [2024-11-20 17:48:21.958966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.082 [2024-11-20 17:48:22.088827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:55.082 [2024-11-20 17:48:22.089333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:55.342 [2024-11-20 17:48:22.341596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:55.342 [2024-11-20 17:48:22.463334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:55.862 180.33 IOPS, 541.00 MiB/s [2024-11-20T17:48:23.038Z] [2024-11-20 17:48:22.816446] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.862 "name": "raid_bdev1", 00:13:55.862 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:55.862 "strip_size_kb": 0, 00:13:55.862 "state": "online", 00:13:55.862 "raid_level": "raid1", 00:13:55.862 "superblock": true, 00:13:55.862 "num_base_bdevs": 2, 00:13:55.862 "num_base_bdevs_discovered": 2, 00:13:55.862 "num_base_bdevs_operational": 2, 00:13:55.862 "process": { 00:13:55.862 "type": "rebuild", 00:13:55.862 "target": "spare", 00:13:55.862 "progress": { 00:13:55.862 "blocks": 16384, 00:13:55.862 "percent": 25 00:13:55.862 } 00:13:55.862 }, 00:13:55.862 "base_bdevs_list": [ 00:13:55.862 { 00:13:55.862 "name": "spare", 
00:13:55.862 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:13:55.862 "is_configured": true, 00:13:55.862 "data_offset": 2048, 00:13:55.862 "data_size": 63488 00:13:55.862 }, 00:13:55.862 { 00:13:55.862 "name": "BaseBdev2", 00:13:55.862 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:55.862 "is_configured": true, 00:13:55.862 "data_offset": 2048, 00:13:55.862 "data_size": 63488 00:13:55.862 } 00:13:55.862 ] 00:13:55.862 }' 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:55.862 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=431 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.862 17:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.862 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.862 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.862 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.122 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.122 "name": "raid_bdev1", 00:13:56.122 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:56.122 "strip_size_kb": 0, 00:13:56.122 "state": "online", 00:13:56.122 "raid_level": "raid1", 00:13:56.122 "superblock": true, 00:13:56.122 "num_base_bdevs": 2, 00:13:56.122 "num_base_bdevs_discovered": 2, 00:13:56.122 "num_base_bdevs_operational": 2, 00:13:56.122 "process": { 00:13:56.122 "type": "rebuild", 00:13:56.122 "target": "spare", 00:13:56.122 "progress": { 00:13:56.122 "blocks": 16384, 00:13:56.122 "percent": 25 00:13:56.122 } 00:13:56.122 }, 00:13:56.122 "base_bdevs_list": [ 00:13:56.122 { 00:13:56.122 "name": "spare", 00:13:56.122 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:13:56.122 "is_configured": true, 00:13:56.122 "data_offset": 2048, 00:13:56.122 "data_size": 63488 00:13:56.122 }, 00:13:56.122 { 00:13:56.122 "name": "BaseBdev2", 00:13:56.122 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:56.122 "is_configured": true, 00:13:56.122 
"data_offset": 2048, 00:13:56.122 "data_size": 63488 00:13:56.122 } 00:13:56.122 ] 00:13:56.122 }' 00:13:56.122 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.122 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.122 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.122 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.122 17:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.382 [2024-11-20 17:48:23.500517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:57.360 150.25 IOPS, 450.75 MiB/s [2024-11-20T17:48:24.536Z] 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.360 [2024-11-20 17:48:24.187795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:57.360 [2024-11-20 17:48:24.188620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:57.360 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.360 "name": "raid_bdev1", 00:13:57.360 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:57.360 "strip_size_kb": 0, 00:13:57.360 "state": "online", 00:13:57.360 "raid_level": "raid1", 00:13:57.360 "superblock": true, 00:13:57.360 "num_base_bdevs": 2, 00:13:57.360 "num_base_bdevs_discovered": 2, 00:13:57.360 "num_base_bdevs_operational": 2, 00:13:57.360 "process": { 00:13:57.360 "type": "rebuild", 00:13:57.360 "target": "spare", 00:13:57.360 "progress": { 00:13:57.360 "blocks": 36864, 00:13:57.360 "percent": 58 00:13:57.360 } 00:13:57.360 }, 00:13:57.361 "base_bdevs_list": [ 00:13:57.361 { 00:13:57.361 "name": "spare", 00:13:57.361 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:13:57.361 "is_configured": true, 00:13:57.361 "data_offset": 2048, 00:13:57.361 "data_size": 63488 00:13:57.361 }, 00:13:57.361 { 00:13:57.361 "name": "BaseBdev2", 00:13:57.361 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:57.361 "is_configured": true, 00:13:57.361 "data_offset": 2048, 00:13:57.361 "data_size": 63488 00:13:57.361 } 00:13:57.361 ] 00:13:57.361 }' 00:13:57.361 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.361 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.361 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:57.361 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.361 17:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.361 [2024-11-20 17:48:24.415728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:57.620 129.00 IOPS, 387.00 MiB/s [2024-11-20T17:48:24.796Z] [2024-11-20 17:48:24.645032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:57.880 [2024-11-20 17:48:24.877508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:57.880 [2024-11-20 17:48:24.878059] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:58.139 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.398 [2024-11-20 17:48:25.315340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:58.398 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.398 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.398 "name": "raid_bdev1", 00:13:58.398 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:58.398 "strip_size_kb": 0, 00:13:58.398 "state": "online", 00:13:58.398 "raid_level": "raid1", 00:13:58.398 "superblock": true, 00:13:58.398 "num_base_bdevs": 2, 00:13:58.398 "num_base_bdevs_discovered": 2, 00:13:58.398 "num_base_bdevs_operational": 2, 00:13:58.398 "process": { 00:13:58.398 "type": "rebuild", 00:13:58.398 "target": "spare", 00:13:58.398 "progress": { 00:13:58.398 "blocks": 51200, 00:13:58.398 "percent": 80 00:13:58.398 } 00:13:58.398 }, 00:13:58.398 "base_bdevs_list": [ 00:13:58.398 { 00:13:58.398 "name": "spare", 00:13:58.398 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:13:58.398 "is_configured": true, 00:13:58.398 "data_offset": 2048, 00:13:58.398 "data_size": 63488 00:13:58.398 }, 00:13:58.398 { 00:13:58.398 "name": "BaseBdev2", 00:13:58.398 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:58.398 "is_configured": true, 00:13:58.398 "data_offset": 2048, 00:13:58.398 "data_size": 63488 00:13:58.398 } 00:13:58.398 ] 00:13:58.398 }' 00:13:58.398 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.398 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.398 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.398 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:13:58.398 17:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.398 [2024-11-20 17:48:25.540456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:58.964 112.83 IOPS, 338.50 MiB/s [2024-11-20T17:48:26.140Z] [2024-11-20 17:48:25.986251] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:58.964 [2024-11-20 17:48:26.091572] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:58.964 [2024-11-20 17:48:26.095665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.534 17:48:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.534 "name": "raid_bdev1", 00:13:59.534 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:59.534 "strip_size_kb": 0, 00:13:59.534 "state": "online", 00:13:59.534 "raid_level": "raid1", 00:13:59.534 "superblock": true, 00:13:59.534 "num_base_bdevs": 2, 00:13:59.534 "num_base_bdevs_discovered": 2, 00:13:59.534 "num_base_bdevs_operational": 2, 00:13:59.534 "base_bdevs_list": [ 00:13:59.534 { 00:13:59.534 "name": "spare", 00:13:59.534 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:13:59.534 "is_configured": true, 00:13:59.534 "data_offset": 2048, 00:13:59.534 "data_size": 63488 00:13:59.534 }, 00:13:59.534 { 00:13:59.534 "name": "BaseBdev2", 00:13:59.534 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:59.534 "is_configured": true, 00:13:59.534 "data_offset": 2048, 00:13:59.534 "data_size": 63488 00:13:59.534 } 00:13:59.534 ] 00:13:59.534 }' 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.534 102.14 IOPS, 306.43 MiB/s [2024-11-20T17:48:26.710Z] 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.534 "name": "raid_bdev1", 00:13:59.534 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:59.534 "strip_size_kb": 0, 00:13:59.534 "state": "online", 00:13:59.534 "raid_level": "raid1", 00:13:59.534 "superblock": true, 00:13:59.534 "num_base_bdevs": 2, 00:13:59.534 "num_base_bdevs_discovered": 2, 00:13:59.534 "num_base_bdevs_operational": 2, 00:13:59.534 "base_bdevs_list": [ 00:13:59.534 { 00:13:59.534 "name": "spare", 00:13:59.534 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:13:59.534 "is_configured": true, 00:13:59.534 "data_offset": 2048, 00:13:59.534 "data_size": 63488 00:13:59.534 }, 00:13:59.534 { 00:13:59.534 "name": "BaseBdev2", 00:13:59.534 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:59.534 "is_configured": true, 00:13:59.534 "data_offset": 2048, 00:13:59.534 "data_size": 63488 00:13:59.534 } 00:13:59.534 ] 00:13:59.534 }' 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.534 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.795 "name": "raid_bdev1", 00:13:59.795 
"uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:13:59.795 "strip_size_kb": 0, 00:13:59.795 "state": "online", 00:13:59.795 "raid_level": "raid1", 00:13:59.795 "superblock": true, 00:13:59.795 "num_base_bdevs": 2, 00:13:59.795 "num_base_bdevs_discovered": 2, 00:13:59.795 "num_base_bdevs_operational": 2, 00:13:59.795 "base_bdevs_list": [ 00:13:59.795 { 00:13:59.795 "name": "spare", 00:13:59.795 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:13:59.795 "is_configured": true, 00:13:59.795 "data_offset": 2048, 00:13:59.795 "data_size": 63488 00:13:59.795 }, 00:13:59.795 { 00:13:59.795 "name": "BaseBdev2", 00:13:59.795 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:13:59.795 "is_configured": true, 00:13:59.795 "data_offset": 2048, 00:13:59.795 "data_size": 63488 00:13:59.795 } 00:13:59.795 ] 00:13:59.795 }' 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.795 17:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.055 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.055 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.055 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.055 [2024-11-20 17:48:27.145801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.055 [2024-11-20 17:48:27.145912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.055 00:14:00.055 Latency(us) 00:14:00.055 [2024-11-20T17:48:27.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.055 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:00.055 raid_bdev1 : 7.68 96.25 288.76 0.00 0.00 15069.80 305.86 114015.47 00:14:00.055 [2024-11-20T17:48:27.231Z] 
=================================================================================================================== 00:14:00.055 [2024-11-20T17:48:27.231Z] Total : 96.25 288.76 0.00 0.00 15069.80 305.86 114015.47 00:14:00.315 { 00:14:00.315 "results": [ 00:14:00.315 { 00:14:00.315 "job": "raid_bdev1", 00:14:00.315 "core_mask": "0x1", 00:14:00.315 "workload": "randrw", 00:14:00.315 "percentage": 50, 00:14:00.315 "status": "finished", 00:14:00.315 "queue_depth": 2, 00:14:00.315 "io_size": 3145728, 00:14:00.315 "runtime": 7.677599, 00:14:00.315 "iops": 96.25405025711814, 00:14:00.315 "mibps": 288.7621507713544, 00:14:00.315 "io_failed": 0, 00:14:00.315 "io_timeout": 0, 00:14:00.315 "avg_latency_us": 15069.804846629753, 00:14:00.315 "min_latency_us": 305.8585152838428, 00:14:00.315 "max_latency_us": 114015.46899563319 00:14:00.315 } 00:14:00.315 ], 00:14:00.315 "core_count": 1 00:14:00.315 } 00:14:00.315 [2024-11-20 17:48:27.240828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.315 [2024-11-20 17:48:27.240909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.315 [2024-11-20 17:48:27.241007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.315 [2024-11-20 17:48:27.241042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:00.315 17:48:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.315 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:00.574 /dev/nbd0 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.574 1+0 records in 00:14:00.574 1+0 records out 00:14:00.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661684 s, 6.2 MB/s 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:00.574 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.575 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:00.575 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.575 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.575 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.575 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.575 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:00.834 /dev/nbd1 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:00.835 17:48:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.835 1+0 records in 00:14:00.835 1+0 records out 00:14:00.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496099 s, 8.3 MB/s 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.835 17:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.096 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.356 [2024-11-20 17:48:28.430432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:01.356 [2024-11-20 17:48:28.430495] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.356 [2024-11-20 17:48:28.430520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:01.356 [2024-11-20 17:48:28.430532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.356 [2024-11-20 17:48:28.433130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.356 [2024-11-20 17:48:28.433173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:01.356 [2024-11-20 17:48:28.433272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:01.356 [2024-11-20 17:48:28.433330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.356 [2024-11-20 17:48:28.433487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.356 spare 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.356 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.616 [2024-11-20 17:48:28.533439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:01.616 [2024-11-20 17:48:28.533511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:01.616 [2024-11-20 17:48:28.533951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:01.616 [2024-11-20 17:48:28.534229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:01.616 [2024-11-20 17:48:28.534253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007b00 00:14:01.616 [2024-11-20 17:48:28.534501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.616 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.616 17:48:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.616 "name": "raid_bdev1", 00:14:01.616 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:01.616 "strip_size_kb": 0, 00:14:01.616 "state": "online", 00:14:01.616 "raid_level": "raid1", 00:14:01.616 "superblock": true, 00:14:01.617 "num_base_bdevs": 2, 00:14:01.617 "num_base_bdevs_discovered": 2, 00:14:01.617 "num_base_bdevs_operational": 2, 00:14:01.617 "base_bdevs_list": [ 00:14:01.617 { 00:14:01.617 "name": "spare", 00:14:01.617 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:14:01.617 "is_configured": true, 00:14:01.617 "data_offset": 2048, 00:14:01.617 "data_size": 63488 00:14:01.617 }, 00:14:01.617 { 00:14:01.617 "name": "BaseBdev2", 00:14:01.617 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:01.617 "is_configured": true, 00:14:01.617 "data_offset": 2048, 00:14:01.617 "data_size": 63488 00:14:01.617 } 00:14:01.617 ] 00:14:01.617 }' 00:14:01.617 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.617 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.876 17:48:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.876 17:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.876 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.876 "name": "raid_bdev1", 00:14:01.876 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:01.876 "strip_size_kb": 0, 00:14:01.876 "state": "online", 00:14:01.876 "raid_level": "raid1", 00:14:01.876 "superblock": true, 00:14:01.876 "num_base_bdevs": 2, 00:14:01.876 "num_base_bdevs_discovered": 2, 00:14:01.876 "num_base_bdevs_operational": 2, 00:14:01.876 "base_bdevs_list": [ 00:14:01.876 { 00:14:01.876 "name": "spare", 00:14:01.876 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:14:01.876 "is_configured": true, 00:14:01.876 "data_offset": 2048, 00:14:01.876 "data_size": 63488 00:14:01.876 }, 00:14:01.876 { 00:14:01.876 "name": "BaseBdev2", 00:14:01.876 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:01.876 "is_configured": true, 00:14:01.876 "data_offset": 2048, 00:14:01.876 "data_size": 63488 00:14:01.876 } 00:14:01.876 ] 00:14:01.876 }' 00:14:01.876 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.876 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.876 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.135 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.135 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.136 17:48:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.136 [2024-11-20 17:48:29.129495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.136 "name": "raid_bdev1", 00:14:02.136 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:02.136 "strip_size_kb": 0, 00:14:02.136 "state": "online", 00:14:02.136 "raid_level": "raid1", 00:14:02.136 "superblock": true, 00:14:02.136 "num_base_bdevs": 2, 00:14:02.136 "num_base_bdevs_discovered": 1, 00:14:02.136 "num_base_bdevs_operational": 1, 00:14:02.136 "base_bdevs_list": [ 00:14:02.136 { 00:14:02.136 "name": null, 00:14:02.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.136 "is_configured": false, 00:14:02.136 "data_offset": 0, 00:14:02.136 "data_size": 63488 00:14:02.136 }, 00:14:02.136 { 00:14:02.136 "name": "BaseBdev2", 00:14:02.136 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:02.136 "is_configured": true, 00:14:02.136 "data_offset": 2048, 00:14:02.136 "data_size": 63488 00:14:02.136 } 00:14:02.136 ] 00:14:02.136 }' 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.136 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.704 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.704 17:48:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.704 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.704 [2024-11-20 17:48:29.616895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.704 [2024-11-20 17:48:29.617162] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:02.704 [2024-11-20 17:48:29.617181] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:02.704 [2024-11-20 17:48:29.617229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.704 [2024-11-20 17:48:29.635250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:02.704 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.704 17:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:02.704 [2024-11-20 17:48:29.637415] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.643 "name": "raid_bdev1", 00:14:03.643 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:03.643 "strip_size_kb": 0, 00:14:03.643 "state": "online", 00:14:03.643 "raid_level": "raid1", 00:14:03.643 "superblock": true, 00:14:03.643 "num_base_bdevs": 2, 00:14:03.643 "num_base_bdevs_discovered": 2, 00:14:03.643 "num_base_bdevs_operational": 2, 00:14:03.643 "process": { 00:14:03.643 "type": "rebuild", 00:14:03.643 "target": "spare", 00:14:03.643 "progress": { 00:14:03.643 "blocks": 20480, 00:14:03.643 "percent": 32 00:14:03.643 } 00:14:03.643 }, 00:14:03.643 "base_bdevs_list": [ 00:14:03.643 { 00:14:03.643 "name": "spare", 00:14:03.643 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:14:03.643 "is_configured": true, 00:14:03.643 "data_offset": 2048, 00:14:03.643 "data_size": 63488 00:14:03.643 }, 00:14:03.643 { 00:14:03.643 "name": "BaseBdev2", 00:14:03.643 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:03.643 "is_configured": true, 00:14:03.643 "data_offset": 2048, 00:14:03.643 "data_size": 63488 00:14:03.643 } 00:14:03.643 ] 00:14:03.643 }' 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.643 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.643 [2024-11-20 17:48:30.785742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.903 [2024-11-20 17:48:30.847477] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:03.903 [2024-11-20 17:48:30.847587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.903 [2024-11-20 17:48:30.847606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.903 [2024-11-20 17:48:30.847614] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.903 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.903 "name": "raid_bdev1", 00:14:03.903 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:03.903 "strip_size_kb": 0, 00:14:03.903 "state": "online", 00:14:03.903 "raid_level": "raid1", 00:14:03.903 "superblock": true, 00:14:03.903 "num_base_bdevs": 2, 00:14:03.903 "num_base_bdevs_discovered": 1, 00:14:03.903 "num_base_bdevs_operational": 1, 00:14:03.903 "base_bdevs_list": [ 00:14:03.903 { 00:14:03.903 "name": null, 00:14:03.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.903 "is_configured": false, 00:14:03.904 "data_offset": 0, 00:14:03.904 "data_size": 63488 00:14:03.904 }, 00:14:03.904 { 00:14:03.904 "name": "BaseBdev2", 00:14:03.904 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:03.904 "is_configured": true, 00:14:03.904 "data_offset": 2048, 00:14:03.904 "data_size": 63488 00:14:03.904 } 00:14:03.904 ] 00:14:03.904 }' 00:14:03.904 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.904 17:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.163 17:48:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.163 17:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.163 17:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.163 [2024-11-20 17:48:31.329083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.163 [2024-11-20 17:48:31.329171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.163 [2024-11-20 17:48:31.329201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:04.163 [2024-11-20 17:48:31.329211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.163 [2024-11-20 17:48:31.329788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.163 [2024-11-20 17:48:31.329816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.163 [2024-11-20 17:48:31.329936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:04.163 [2024-11-20 17:48:31.329955] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:04.163 [2024-11-20 17:48:31.329970] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:04.163 [2024-11-20 17:48:31.329997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.423 [2024-11-20 17:48:31.348831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:04.423 spare 00:14:04.423 17:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.423 17:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:04.423 [2024-11-20 17:48:31.351122] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.363 "name": "raid_bdev1", 00:14:05.363 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:05.363 "strip_size_kb": 0, 00:14:05.363 
"state": "online", 00:14:05.363 "raid_level": "raid1", 00:14:05.363 "superblock": true, 00:14:05.363 "num_base_bdevs": 2, 00:14:05.363 "num_base_bdevs_discovered": 2, 00:14:05.363 "num_base_bdevs_operational": 2, 00:14:05.363 "process": { 00:14:05.363 "type": "rebuild", 00:14:05.363 "target": "spare", 00:14:05.363 "progress": { 00:14:05.363 "blocks": 20480, 00:14:05.363 "percent": 32 00:14:05.363 } 00:14:05.363 }, 00:14:05.363 "base_bdevs_list": [ 00:14:05.363 { 00:14:05.363 "name": "spare", 00:14:05.363 "uuid": "96d02333-dab0-5ec8-a2fb-d0b3f3da3948", 00:14:05.363 "is_configured": true, 00:14:05.363 "data_offset": 2048, 00:14:05.363 "data_size": 63488 00:14:05.363 }, 00:14:05.363 { 00:14:05.363 "name": "BaseBdev2", 00:14:05.363 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:05.363 "is_configured": true, 00:14:05.363 "data_offset": 2048, 00:14:05.363 "data_size": 63488 00:14:05.363 } 00:14:05.363 ] 00:14:05.363 }' 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.363 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.363 [2024-11-20 17:48:32.494724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.623 [2024-11-20 17:48:32.561307] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:14:05.623 [2024-11-20 17:48:32.561408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.623 [2024-11-20 17:48:32.561424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.623 [2024-11-20 17:48:32.561439] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.623 17:48:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.623 "name": "raid_bdev1", 00:14:05.623 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:05.623 "strip_size_kb": 0, 00:14:05.623 "state": "online", 00:14:05.623 "raid_level": "raid1", 00:14:05.623 "superblock": true, 00:14:05.623 "num_base_bdevs": 2, 00:14:05.623 "num_base_bdevs_discovered": 1, 00:14:05.623 "num_base_bdevs_operational": 1, 00:14:05.623 "base_bdevs_list": [ 00:14:05.623 { 00:14:05.623 "name": null, 00:14:05.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.623 "is_configured": false, 00:14:05.623 "data_offset": 0, 00:14:05.623 "data_size": 63488 00:14:05.623 }, 00:14:05.623 { 00:14:05.623 "name": "BaseBdev2", 00:14:05.623 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:05.623 "is_configured": true, 00:14:05.623 "data_offset": 2048, 00:14:05.623 "data_size": 63488 00:14:05.623 } 00:14:05.623 ] 00:14:05.623 }' 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.623 17:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.193 "name": "raid_bdev1", 00:14:06.193 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:06.193 "strip_size_kb": 0, 00:14:06.193 "state": "online", 00:14:06.193 "raid_level": "raid1", 00:14:06.193 "superblock": true, 00:14:06.193 "num_base_bdevs": 2, 00:14:06.193 "num_base_bdevs_discovered": 1, 00:14:06.193 "num_base_bdevs_operational": 1, 00:14:06.193 "base_bdevs_list": [ 00:14:06.193 { 00:14:06.193 "name": null, 00:14:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.193 "is_configured": false, 00:14:06.193 "data_offset": 0, 00:14:06.193 "data_size": 63488 00:14:06.193 }, 00:14:06.193 { 00:14:06.193 "name": "BaseBdev2", 00:14:06.193 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:06.193 "is_configured": true, 00:14:06.193 "data_offset": 2048, 00:14:06.193 "data_size": 63488 00:14:06.193 } 00:14:06.193 ] 00:14:06.193 }' 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.193 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.193 [2024-11-20 17:48:33.234179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.193 [2024-11-20 17:48:33.234267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.193 [2024-11-20 17:48:33.234296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:06.193 [2024-11-20 17:48:33.234312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.194 [2024-11-20 17:48:33.234843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.194 [2024-11-20 17:48:33.234872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.194 [2024-11-20 17:48:33.234963] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:06.194 [2024-11-20 17:48:33.234985] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:06.194 [2024-11-20 17:48:33.234994] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:06.194 [2024-11-20 17:48:33.235020] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:06.194 BaseBdev1 00:14:06.194 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.194 17:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.130 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.130 "name": "raid_bdev1", 00:14:07.130 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:07.130 "strip_size_kb": 0, 00:14:07.130 "state": "online", 00:14:07.130 "raid_level": "raid1", 00:14:07.130 "superblock": true, 00:14:07.130 "num_base_bdevs": 2, 00:14:07.130 "num_base_bdevs_discovered": 1, 00:14:07.130 "num_base_bdevs_operational": 1, 00:14:07.130 "base_bdevs_list": [ 00:14:07.130 { 00:14:07.130 "name": null, 00:14:07.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.131 "is_configured": false, 00:14:07.131 "data_offset": 0, 00:14:07.131 "data_size": 63488 00:14:07.131 }, 00:14:07.131 { 00:14:07.131 "name": "BaseBdev2", 00:14:07.131 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:07.131 "is_configured": true, 00:14:07.131 "data_offset": 2048, 00:14:07.131 "data_size": 63488 00:14:07.131 } 00:14:07.131 ] 00:14:07.131 }' 00:14:07.131 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.131 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.697 "name": "raid_bdev1", 00:14:07.697 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:07.697 "strip_size_kb": 0, 00:14:07.697 "state": "online", 00:14:07.697 "raid_level": "raid1", 00:14:07.697 "superblock": true, 00:14:07.697 "num_base_bdevs": 2, 00:14:07.697 "num_base_bdevs_discovered": 1, 00:14:07.697 "num_base_bdevs_operational": 1, 00:14:07.697 "base_bdevs_list": [ 00:14:07.697 { 00:14:07.697 "name": null, 00:14:07.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.697 "is_configured": false, 00:14:07.697 "data_offset": 0, 00:14:07.697 "data_size": 63488 00:14:07.697 }, 00:14:07.697 { 00:14:07.697 "name": "BaseBdev2", 00:14:07.697 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:07.697 "is_configured": true, 00:14:07.697 "data_offset": 2048, 00:14:07.697 "data_size": 63488 00:14:07.697 } 00:14:07.697 ] 00:14:07.697 }' 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.697 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.956 [2024-11-20 17:48:34.904545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.956 [2024-11-20 17:48:34.904759] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.956 [2024-11-20 17:48:34.904783] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.956 request: 00:14:07.956 { 00:14:07.956 "base_bdev": "BaseBdev1", 00:14:07.956 "raid_bdev": "raid_bdev1", 00:14:07.956 "method": "bdev_raid_add_base_bdev", 00:14:07.956 "req_id": 1 00:14:07.956 } 00:14:07.956 Got JSON-RPC error response 00:14:07.956 response: 00:14:07.956 { 00:14:07.956 "code": -22, 00:14:07.956 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:07.956 } 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:07.956 17:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.906 17:48:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.906 "name": "raid_bdev1", 00:14:08.906 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:08.906 "strip_size_kb": 0, 00:14:08.906 "state": "online", 00:14:08.906 "raid_level": "raid1", 00:14:08.906 "superblock": true, 00:14:08.906 "num_base_bdevs": 2, 00:14:08.906 "num_base_bdevs_discovered": 1, 00:14:08.906 "num_base_bdevs_operational": 1, 00:14:08.906 "base_bdevs_list": [ 00:14:08.906 { 00:14:08.906 "name": null, 00:14:08.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.906 "is_configured": false, 00:14:08.906 "data_offset": 0, 00:14:08.906 "data_size": 63488 00:14:08.906 }, 00:14:08.906 { 00:14:08.906 "name": "BaseBdev2", 00:14:08.906 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:08.906 "is_configured": true, 00:14:08.906 "data_offset": 2048, 00:14:08.906 "data_size": 63488 00:14:08.906 } 00:14:08.906 ] 00:14:08.906 }' 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.906 17:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.473 17:48:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.473 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.473 "name": "raid_bdev1", 00:14:09.473 "uuid": "bd3d1dc3-4148-4720-a72e-c27c20f87e95", 00:14:09.473 "strip_size_kb": 0, 00:14:09.473 "state": "online", 00:14:09.473 "raid_level": "raid1", 00:14:09.473 "superblock": true, 00:14:09.473 "num_base_bdevs": 2, 00:14:09.473 "num_base_bdevs_discovered": 1, 00:14:09.473 "num_base_bdevs_operational": 1, 00:14:09.473 "base_bdevs_list": [ 00:14:09.473 { 00:14:09.473 "name": null, 00:14:09.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.474 "is_configured": false, 00:14:09.474 "data_offset": 0, 00:14:09.474 "data_size": 63488 00:14:09.474 }, 00:14:09.474 { 00:14:09.474 "name": "BaseBdev2", 00:14:09.474 "uuid": "951236bc-3cd6-5d86-9646-3225f0be5ddf", 00:14:09.474 "is_configured": true, 00:14:09.474 "data_offset": 2048, 00:14:09.474 "data_size": 63488 00:14:09.474 } 00:14:09.474 ] 00:14:09.474 }' 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.474 17:48:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77295 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77295 ']' 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77295 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77295 00:14:09.474 killing process with pid 77295 00:14:09.474 Received shutdown signal, test time was about 17.060253 seconds 00:14:09.474 00:14:09.474 Latency(us) 00:14:09.474 [2024-11-20T17:48:36.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.474 [2024-11-20T17:48:36.650Z] =================================================================================================================== 00:14:09.474 [2024-11-20T17:48:36.650Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77295' 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77295 00:14:09.474 17:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77295 00:14:09.474 [2024-11-20 17:48:36.582195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.474 [2024-11-20 17:48:36.582367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.474 [2024-11-20 17:48:36.582451] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.474 [2024-11-20 17:48:36.582469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:09.732 [2024-11-20 17:48:36.838687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:11.110 00:14:11.110 real 0m20.421s 00:14:11.110 user 0m26.473s 00:14:11.110 sys 0m2.437s 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.110 ************************************ 00:14:11.110 END TEST raid_rebuild_test_sb_io 00:14:11.110 ************************************ 00:14:11.110 17:48:38 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:11.110 17:48:38 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:11.110 17:48:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:11.110 17:48:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.110 17:48:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.110 ************************************ 00:14:11.110 START TEST raid_rebuild_test 00:14:11.110 ************************************ 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:11.110 17:48:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77988 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77988 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77988 ']' 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.110 17:48:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.371 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:11.371 Zero copy mechanism will not be used. 
00:14:11.371 [2024-11-20 17:48:38.312204] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:14:11.371 [2024-11-20 17:48:38.312361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77988 ] 00:14:11.371 [2024-11-20 17:48:38.490374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.631 [2024-11-20 17:48:38.634744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.889 [2024-11-20 17:48:38.868551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.889 [2024-11-20 17:48:38.868640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.149 BaseBdev1_malloc 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.149 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.149 
[2024-11-20 17:48:39.194555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.150 [2024-11-20 17:48:39.194628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.150 [2024-11-20 17:48:39.194651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.150 [2024-11-20 17:48:39.194664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.150 [2024-11-20 17:48:39.197148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.150 [2024-11-20 17:48:39.197193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.150 BaseBdev1 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.150 BaseBdev2_malloc 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.150 [2024-11-20 17:48:39.255284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.150 [2024-11-20 17:48:39.255371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:12.150 [2024-11-20 17:48:39.255395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.150 [2024-11-20 17:48:39.255407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.150 [2024-11-20 17:48:39.257758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.150 [2024-11-20 17:48:39.257800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.150 BaseBdev2 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.150 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 BaseBdev3_malloc 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 [2024-11-20 17:48:39.332622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:12.409 [2024-11-20 17:48:39.332703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.409 [2024-11-20 17:48:39.332727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:12.409 [2024-11-20 17:48:39.332740] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.409 [2024-11-20 17:48:39.335201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.409 [2024-11-20 17:48:39.335236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:12.409 BaseBdev3 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 BaseBdev4_malloc 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 [2024-11-20 17:48:39.395379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:12.409 [2024-11-20 17:48:39.395457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.409 [2024-11-20 17:48:39.395482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:12.409 [2024-11-20 17:48:39.395495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.409 [2024-11-20 17:48:39.397923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.409 [2024-11-20 17:48:39.397963] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:12.409 BaseBdev4 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 spare_malloc 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 spare_delay 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 [2024-11-20 17:48:39.463352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.409 [2024-11-20 17:48:39.463428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.409 [2024-11-20 17:48:39.463446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:12.409 [2024-11-20 17:48:39.463458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.409 [2024-11-20 
17:48:39.465760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.409 [2024-11-20 17:48:39.465802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.409 spare 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 [2024-11-20 17:48:39.475389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.409 [2024-11-20 17:48:39.477567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.409 [2024-11-20 17:48:39.477641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.409 [2024-11-20 17:48:39.477695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.409 [2024-11-20 17:48:39.477779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.409 [2024-11-20 17:48:39.477794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:12.409 [2024-11-20 17:48:39.478109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:12.409 [2024-11-20 17:48:39.478314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:12.409 [2024-11-20 17:48:39.478335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:12.409 [2024-11-20 17:48:39.478512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.409 "name": "raid_bdev1", 00:14:12.409 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:12.409 "strip_size_kb": 0, 00:14:12.409 "state": "online", 00:14:12.409 "raid_level": 
"raid1", 00:14:12.409 "superblock": false, 00:14:12.409 "num_base_bdevs": 4, 00:14:12.409 "num_base_bdevs_discovered": 4, 00:14:12.409 "num_base_bdevs_operational": 4, 00:14:12.409 "base_bdevs_list": [ 00:14:12.409 { 00:14:12.409 "name": "BaseBdev1", 00:14:12.409 "uuid": "76e48e27-42cf-56c0-adcf-2601e9779fd2", 00:14:12.409 "is_configured": true, 00:14:12.409 "data_offset": 0, 00:14:12.409 "data_size": 65536 00:14:12.409 }, 00:14:12.409 { 00:14:12.409 "name": "BaseBdev2", 00:14:12.409 "uuid": "e451cf92-0b29-5d30-a1c0-e2bf55cc1484", 00:14:12.409 "is_configured": true, 00:14:12.409 "data_offset": 0, 00:14:12.409 "data_size": 65536 00:14:12.409 }, 00:14:12.409 { 00:14:12.409 "name": "BaseBdev3", 00:14:12.409 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:12.409 "is_configured": true, 00:14:12.409 "data_offset": 0, 00:14:12.409 "data_size": 65536 00:14:12.409 }, 00:14:12.409 { 00:14:12.409 "name": "BaseBdev4", 00:14:12.409 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:12.409 "is_configured": true, 00:14:12.409 "data_offset": 0, 00:14:12.409 "data_size": 65536 00:14:12.409 } 00:14:12.409 ] 00:14:12.409 }' 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.409 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.979 [2024-11-20 17:48:39.931013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.979 17:48:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.979 17:48:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.979 17:48:40 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:13.239 [2024-11-20 17:48:40.206269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:13.239 /dev/nbd0 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.239 1+0 records in 00:14:13.239 1+0 records out 00:14:13.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453111 s, 9.0 MB/s 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:13.239 17:48:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:18.511 65536+0 records in 00:14:18.511 65536+0 records out 00:14:18.511 33554432 bytes (34 MB, 32 MiB) copied, 5.3953 s, 6.2 MB/s 00:14:18.511 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:18.511 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.511 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:18.782 [2024-11-20 17:48:45.889451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:18.782 
17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.782 [2024-11-20 17:48:45.929848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.782 17:48:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.782 17:48:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.058 17:48:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.058 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.058 "name": "raid_bdev1", 00:14:19.058 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:19.058 "strip_size_kb": 0, 00:14:19.058 "state": "online", 00:14:19.058 "raid_level": "raid1", 00:14:19.058 "superblock": false, 00:14:19.058 "num_base_bdevs": 4, 00:14:19.058 "num_base_bdevs_discovered": 3, 00:14:19.058 "num_base_bdevs_operational": 3, 00:14:19.058 "base_bdevs_list": [ 00:14:19.058 { 00:14:19.058 "name": null, 00:14:19.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.058 "is_configured": false, 00:14:19.058 "data_offset": 0, 00:14:19.058 "data_size": 65536 00:14:19.058 }, 00:14:19.058 { 00:14:19.058 "name": "BaseBdev2", 00:14:19.058 "uuid": "e451cf92-0b29-5d30-a1c0-e2bf55cc1484", 00:14:19.058 "is_configured": true, 00:14:19.058 "data_offset": 0, 00:14:19.058 "data_size": 65536 00:14:19.058 }, 00:14:19.058 { 00:14:19.058 "name": "BaseBdev3", 00:14:19.058 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:19.058 "is_configured": true, 00:14:19.058 "data_offset": 0, 00:14:19.058 "data_size": 65536 00:14:19.058 }, 00:14:19.058 { 00:14:19.058 "name": "BaseBdev4", 00:14:19.058 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:19.058 
"is_configured": true, 00:14:19.058 "data_offset": 0, 00:14:19.058 "data_size": 65536 00:14:19.058 } 00:14:19.058 ] 00:14:19.058 }' 00:14:19.058 17:48:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.058 17:48:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.318 17:48:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.318 17:48:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.318 17:48:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.318 [2024-11-20 17:48:46.345142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.318 [2024-11-20 17:48:46.362127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:19.318 17:48:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.318 17:48:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:19.318 [2024-11-20 17:48:46.364623] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.256 "name": "raid_bdev1", 00:14:20.256 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:20.256 "strip_size_kb": 0, 00:14:20.256 "state": "online", 00:14:20.256 "raid_level": "raid1", 00:14:20.256 "superblock": false, 00:14:20.256 "num_base_bdevs": 4, 00:14:20.256 "num_base_bdevs_discovered": 4, 00:14:20.256 "num_base_bdevs_operational": 4, 00:14:20.256 "process": { 00:14:20.256 "type": "rebuild", 00:14:20.256 "target": "spare", 00:14:20.256 "progress": { 00:14:20.256 "blocks": 20480, 00:14:20.256 "percent": 31 00:14:20.256 } 00:14:20.256 }, 00:14:20.256 "base_bdevs_list": [ 00:14:20.256 { 00:14:20.256 "name": "spare", 00:14:20.256 "uuid": "aba79ee3-faf0-5b81-b853-7203908e046e", 00:14:20.256 "is_configured": true, 00:14:20.256 "data_offset": 0, 00:14:20.256 "data_size": 65536 00:14:20.256 }, 00:14:20.256 { 00:14:20.256 "name": "BaseBdev2", 00:14:20.256 "uuid": "e451cf92-0b29-5d30-a1c0-e2bf55cc1484", 00:14:20.256 "is_configured": true, 00:14:20.256 "data_offset": 0, 00:14:20.256 "data_size": 65536 00:14:20.256 }, 00:14:20.256 { 00:14:20.256 "name": "BaseBdev3", 00:14:20.256 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:20.256 "is_configured": true, 00:14:20.256 "data_offset": 0, 00:14:20.256 "data_size": 65536 00:14:20.256 }, 00:14:20.256 { 00:14:20.256 "name": "BaseBdev4", 00:14:20.256 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:20.256 "is_configured": true, 00:14:20.256 "data_offset": 0, 00:14:20.256 "data_size": 65536 00:14:20.256 } 00:14:20.256 ] 00:14:20.256 }' 00:14:20.256 17:48:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.516 [2024-11-20 17:48:47.508135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.516 [2024-11-20 17:48:47.574842] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:20.516 [2024-11-20 17:48:47.574933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.516 [2024-11-20 17:48:47.574951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.516 [2024-11-20 17:48:47.574963] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.516 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.516 "name": "raid_bdev1", 00:14:20.516 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:20.516 "strip_size_kb": 0, 00:14:20.516 "state": "online", 00:14:20.516 "raid_level": "raid1", 00:14:20.516 "superblock": false, 00:14:20.516 "num_base_bdevs": 4, 00:14:20.516 "num_base_bdevs_discovered": 3, 00:14:20.516 "num_base_bdevs_operational": 3, 00:14:20.516 "base_bdevs_list": [ 00:14:20.516 { 00:14:20.516 "name": null, 00:14:20.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.516 "is_configured": false, 00:14:20.516 "data_offset": 0, 00:14:20.516 "data_size": 65536 00:14:20.516 }, 00:14:20.516 { 00:14:20.516 "name": "BaseBdev2", 00:14:20.516 "uuid": "e451cf92-0b29-5d30-a1c0-e2bf55cc1484", 00:14:20.516 "is_configured": true, 00:14:20.516 "data_offset": 0, 00:14:20.516 "data_size": 65536 00:14:20.516 }, 00:14:20.516 { 
00:14:20.516 "name": "BaseBdev3", 00:14:20.516 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:20.516 "is_configured": true, 00:14:20.516 "data_offset": 0, 00:14:20.516 "data_size": 65536 00:14:20.516 }, 00:14:20.516 { 00:14:20.516 "name": "BaseBdev4", 00:14:20.516 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:20.516 "is_configured": true, 00:14:20.517 "data_offset": 0, 00:14:20.517 "data_size": 65536 00:14:20.517 } 00:14:20.517 ] 00:14:20.517 }' 00:14:20.517 17:48:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.517 17:48:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.085 "name": "raid_bdev1", 00:14:21.085 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:21.085 "strip_size_kb": 0, 00:14:21.085 "state": "online", 
00:14:21.085 "raid_level": "raid1", 00:14:21.085 "superblock": false, 00:14:21.085 "num_base_bdevs": 4, 00:14:21.085 "num_base_bdevs_discovered": 3, 00:14:21.085 "num_base_bdevs_operational": 3, 00:14:21.085 "base_bdevs_list": [ 00:14:21.085 { 00:14:21.085 "name": null, 00:14:21.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.085 "is_configured": false, 00:14:21.085 "data_offset": 0, 00:14:21.085 "data_size": 65536 00:14:21.085 }, 00:14:21.085 { 00:14:21.085 "name": "BaseBdev2", 00:14:21.085 "uuid": "e451cf92-0b29-5d30-a1c0-e2bf55cc1484", 00:14:21.085 "is_configured": true, 00:14:21.085 "data_offset": 0, 00:14:21.085 "data_size": 65536 00:14:21.085 }, 00:14:21.085 { 00:14:21.085 "name": "BaseBdev3", 00:14:21.085 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:21.085 "is_configured": true, 00:14:21.085 "data_offset": 0, 00:14:21.085 "data_size": 65536 00:14:21.085 }, 00:14:21.085 { 00:14:21.085 "name": "BaseBdev4", 00:14:21.085 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:21.085 "is_configured": true, 00:14:21.085 "data_offset": 0, 00:14:21.085 "data_size": 65536 00:14:21.085 } 00:14:21.085 ] 00:14:21.085 }' 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.085 17:48:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.085 [2024-11-20 17:48:48.183025] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.086 [2024-11-20 17:48:48.197335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:21.086 17:48:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.086 17:48:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:21.086 [2024-11-20 17:48:48.199631] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.466 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.466 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.466 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.466 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.467 "name": "raid_bdev1", 00:14:22.467 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:22.467 "strip_size_kb": 0, 00:14:22.467 "state": "online", 00:14:22.467 "raid_level": "raid1", 00:14:22.467 "superblock": false, 00:14:22.467 "num_base_bdevs": 4, 00:14:22.467 
"num_base_bdevs_discovered": 4, 00:14:22.467 "num_base_bdevs_operational": 4, 00:14:22.467 "process": { 00:14:22.467 "type": "rebuild", 00:14:22.467 "target": "spare", 00:14:22.467 "progress": { 00:14:22.467 "blocks": 20480, 00:14:22.467 "percent": 31 00:14:22.467 } 00:14:22.467 }, 00:14:22.467 "base_bdevs_list": [ 00:14:22.467 { 00:14:22.467 "name": "spare", 00:14:22.467 "uuid": "aba79ee3-faf0-5b81-b853-7203908e046e", 00:14:22.467 "is_configured": true, 00:14:22.467 "data_offset": 0, 00:14:22.467 "data_size": 65536 00:14:22.467 }, 00:14:22.467 { 00:14:22.467 "name": "BaseBdev2", 00:14:22.467 "uuid": "e451cf92-0b29-5d30-a1c0-e2bf55cc1484", 00:14:22.467 "is_configured": true, 00:14:22.467 "data_offset": 0, 00:14:22.467 "data_size": 65536 00:14:22.467 }, 00:14:22.467 { 00:14:22.467 "name": "BaseBdev3", 00:14:22.467 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:22.467 "is_configured": true, 00:14:22.467 "data_offset": 0, 00:14:22.467 "data_size": 65536 00:14:22.467 }, 00:14:22.467 { 00:14:22.467 "name": "BaseBdev4", 00:14:22.467 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:22.467 "is_configured": true, 00:14:22.467 "data_offset": 0, 00:14:22.467 "data_size": 65536 00:14:22.467 } 00:14:22.467 ] 00:14:22.467 }' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.467 [2024-11-20 17:48:49.338204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.467 [2024-11-20 17:48:49.409845] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.467 17:48:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.467 "name": "raid_bdev1", 00:14:22.467 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:22.467 "strip_size_kb": 0, 00:14:22.467 "state": "online", 00:14:22.467 "raid_level": "raid1", 00:14:22.467 "superblock": false, 00:14:22.467 "num_base_bdevs": 4, 00:14:22.467 "num_base_bdevs_discovered": 3, 00:14:22.467 "num_base_bdevs_operational": 3, 00:14:22.467 "process": { 00:14:22.467 "type": "rebuild", 00:14:22.467 "target": "spare", 00:14:22.467 "progress": { 00:14:22.467 "blocks": 24576, 00:14:22.467 "percent": 37 00:14:22.467 } 00:14:22.467 }, 00:14:22.467 "base_bdevs_list": [ 00:14:22.467 { 00:14:22.467 "name": "spare", 00:14:22.467 "uuid": "aba79ee3-faf0-5b81-b853-7203908e046e", 00:14:22.467 "is_configured": true, 00:14:22.467 "data_offset": 0, 00:14:22.467 "data_size": 65536 00:14:22.467 }, 00:14:22.467 { 00:14:22.467 "name": null, 00:14:22.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.467 "is_configured": false, 00:14:22.467 "data_offset": 0, 00:14:22.467 "data_size": 65536 00:14:22.467 }, 00:14:22.467 { 00:14:22.467 "name": "BaseBdev3", 00:14:22.467 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:22.467 "is_configured": true, 00:14:22.467 "data_offset": 0, 00:14:22.467 "data_size": 65536 00:14:22.467 }, 00:14:22.467 { 00:14:22.467 "name": "BaseBdev4", 00:14:22.467 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:22.467 "is_configured": true, 00:14:22.467 "data_offset": 0, 00:14:22.467 "data_size": 65536 00:14:22.467 } 00:14:22.467 ] 00:14:22.467 }' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=458 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.467 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.468 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.468 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.468 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.468 17:48:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.468 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.468 "name": "raid_bdev1", 00:14:22.468 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:22.468 "strip_size_kb": 0, 00:14:22.468 "state": "online", 00:14:22.468 "raid_level": "raid1", 00:14:22.468 "superblock": false, 00:14:22.468 "num_base_bdevs": 4, 00:14:22.468 "num_base_bdevs_discovered": 3, 00:14:22.468 "num_base_bdevs_operational": 3, 00:14:22.468 "process": { 00:14:22.468 "type": "rebuild", 00:14:22.468 "target": "spare", 00:14:22.468 "progress": { 
00:14:22.468 "blocks": 26624, 00:14:22.468 "percent": 40 00:14:22.468 } 00:14:22.468 }, 00:14:22.468 "base_bdevs_list": [ 00:14:22.468 { 00:14:22.468 "name": "spare", 00:14:22.468 "uuid": "aba79ee3-faf0-5b81-b853-7203908e046e", 00:14:22.468 "is_configured": true, 00:14:22.468 "data_offset": 0, 00:14:22.468 "data_size": 65536 00:14:22.468 }, 00:14:22.468 { 00:14:22.468 "name": null, 00:14:22.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.468 "is_configured": false, 00:14:22.468 "data_offset": 0, 00:14:22.468 "data_size": 65536 00:14:22.468 }, 00:14:22.468 { 00:14:22.468 "name": "BaseBdev3", 00:14:22.468 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:22.468 "is_configured": true, 00:14:22.468 "data_offset": 0, 00:14:22.468 "data_size": 65536 00:14:22.468 }, 00:14:22.468 { 00:14:22.468 "name": "BaseBdev4", 00:14:22.468 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:22.468 "is_configured": true, 00:14:22.468 "data_offset": 0, 00:14:22.468 "data_size": 65536 00:14:22.468 } 00:14:22.468 ] 00:14:22.468 }' 00:14:22.468 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.728 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.728 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.728 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.728 17:48:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.667 "name": "raid_bdev1", 00:14:23.667 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:23.667 "strip_size_kb": 0, 00:14:23.667 "state": "online", 00:14:23.667 "raid_level": "raid1", 00:14:23.667 "superblock": false, 00:14:23.667 "num_base_bdevs": 4, 00:14:23.667 "num_base_bdevs_discovered": 3, 00:14:23.667 "num_base_bdevs_operational": 3, 00:14:23.667 "process": { 00:14:23.667 "type": "rebuild", 00:14:23.667 "target": "spare", 00:14:23.667 "progress": { 00:14:23.667 "blocks": 51200, 00:14:23.667 "percent": 78 00:14:23.667 } 00:14:23.667 }, 00:14:23.667 "base_bdevs_list": [ 00:14:23.667 { 00:14:23.667 "name": "spare", 00:14:23.667 "uuid": "aba79ee3-faf0-5b81-b853-7203908e046e", 00:14:23.667 "is_configured": true, 00:14:23.667 "data_offset": 0, 00:14:23.667 "data_size": 65536 00:14:23.667 }, 00:14:23.667 { 00:14:23.667 "name": null, 00:14:23.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.667 "is_configured": false, 00:14:23.667 "data_offset": 0, 00:14:23.667 "data_size": 65536 00:14:23.667 }, 00:14:23.667 { 00:14:23.667 "name": "BaseBdev3", 00:14:23.667 "uuid": 
"b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:23.667 "is_configured": true, 00:14:23.667 "data_offset": 0, 00:14:23.667 "data_size": 65536 00:14:23.667 }, 00:14:23.667 { 00:14:23.667 "name": "BaseBdev4", 00:14:23.667 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:23.667 "is_configured": true, 00:14:23.667 "data_offset": 0, 00:14:23.667 "data_size": 65536 00:14:23.667 } 00:14:23.667 ] 00:14:23.667 }' 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.667 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.927 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.927 17:48:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.502 [2024-11-20 17:48:51.426094] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:24.502 [2024-11-20 17:48:51.426209] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:24.502 [2024-11-20 17:48:51.426263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.769 17:48:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.769 "name": "raid_bdev1", 00:14:24.769 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:24.769 "strip_size_kb": 0, 00:14:24.769 "state": "online", 00:14:24.769 "raid_level": "raid1", 00:14:24.769 "superblock": false, 00:14:24.769 "num_base_bdevs": 4, 00:14:24.769 "num_base_bdevs_discovered": 3, 00:14:24.769 "num_base_bdevs_operational": 3, 00:14:24.769 "base_bdevs_list": [ 00:14:24.769 { 00:14:24.769 "name": "spare", 00:14:24.769 "uuid": "aba79ee3-faf0-5b81-b853-7203908e046e", 00:14:24.769 "is_configured": true, 00:14:24.769 "data_offset": 0, 00:14:24.769 "data_size": 65536 00:14:24.769 }, 00:14:24.769 { 00:14:24.769 "name": null, 00:14:24.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.769 "is_configured": false, 00:14:24.769 "data_offset": 0, 00:14:24.769 "data_size": 65536 00:14:24.769 }, 00:14:24.769 { 00:14:24.769 "name": "BaseBdev3", 00:14:24.769 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:24.769 "is_configured": true, 00:14:24.769 "data_offset": 0, 00:14:24.769 "data_size": 65536 00:14:24.769 }, 00:14:24.769 { 00:14:24.769 "name": "BaseBdev4", 00:14:24.769 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:24.769 "is_configured": true, 00:14:24.769 "data_offset": 0, 00:14:24.769 "data_size": 65536 00:14:24.769 } 00:14:24.769 ] 00:14:24.769 }' 00:14:24.769 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:25.030 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:25.030 17:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.030 "name": "raid_bdev1", 00:14:25.030 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:25.030 "strip_size_kb": 0, 00:14:25.030 "state": "online", 00:14:25.030 "raid_level": "raid1", 00:14:25.030 "superblock": false, 00:14:25.030 "num_base_bdevs": 4, 00:14:25.030 "num_base_bdevs_discovered": 3, 00:14:25.030 "num_base_bdevs_operational": 3, 00:14:25.030 
"base_bdevs_list": [ 00:14:25.030 { 00:14:25.030 "name": "spare", 00:14:25.030 "uuid": "aba79ee3-faf0-5b81-b853-7203908e046e", 00:14:25.030 "is_configured": true, 00:14:25.030 "data_offset": 0, 00:14:25.030 "data_size": 65536 00:14:25.030 }, 00:14:25.030 { 00:14:25.030 "name": null, 00:14:25.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.030 "is_configured": false, 00:14:25.030 "data_offset": 0, 00:14:25.030 "data_size": 65536 00:14:25.030 }, 00:14:25.030 { 00:14:25.030 "name": "BaseBdev3", 00:14:25.030 "uuid": "b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:25.030 "is_configured": true, 00:14:25.030 "data_offset": 0, 00:14:25.030 "data_size": 65536 00:14:25.030 }, 00:14:25.030 { 00:14:25.030 "name": "BaseBdev4", 00:14:25.030 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:25.030 "is_configured": true, 00:14:25.030 "data_offset": 0, 00:14:25.030 "data_size": 65536 00:14:25.030 } 00:14:25.030 ] 00:14:25.030 }' 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.030 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.290 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.290 "name": "raid_bdev1", 00:14:25.290 "uuid": "e5a87233-a83d-4bde-88bd-a3e1eede4118", 00:14:25.290 "strip_size_kb": 0, 00:14:25.290 "state": "online", 00:14:25.290 "raid_level": "raid1", 00:14:25.290 "superblock": false, 00:14:25.290 "num_base_bdevs": 4, 00:14:25.290 "num_base_bdevs_discovered": 3, 00:14:25.290 "num_base_bdevs_operational": 3, 00:14:25.290 "base_bdevs_list": [ 00:14:25.290 { 00:14:25.290 "name": "spare", 00:14:25.290 "uuid": "aba79ee3-faf0-5b81-b853-7203908e046e", 00:14:25.290 "is_configured": true, 00:14:25.290 "data_offset": 0, 00:14:25.290 "data_size": 65536 00:14:25.290 }, 00:14:25.290 { 00:14:25.290 "name": null, 00:14:25.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.290 "is_configured": false, 00:14:25.290 "data_offset": 0, 00:14:25.290 "data_size": 65536 00:14:25.290 }, 00:14:25.290 { 00:14:25.290 "name": "BaseBdev3", 00:14:25.290 "uuid": 
"b80bcee1-42fc-51d8-8971-409d90cd5087", 00:14:25.290 "is_configured": true, 00:14:25.290 "data_offset": 0, 00:14:25.290 "data_size": 65536 00:14:25.290 }, 00:14:25.290 { 00:14:25.290 "name": "BaseBdev4", 00:14:25.290 "uuid": "927fd625-a9b8-5cd0-893b-bac4404b25f1", 00:14:25.290 "is_configured": true, 00:14:25.290 "data_offset": 0, 00:14:25.290 "data_size": 65536 00:14:25.290 } 00:14:25.290 ] 00:14:25.290 }' 00:14:25.290 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.290 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.550 [2024-11-20 17:48:52.641137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.550 [2024-11-20 17:48:52.641182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.550 [2024-11-20 17:48:52.641301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.550 [2024-11-20 17:48:52.641408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.550 [2024-11-20 17:48:52.641421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.550 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:25.811 /dev/nbd0 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:25.811 17:48:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.811 1+0 records in 00:14:25.811 1+0 records out 00:14:25.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437903 s, 9.4 MB/s 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.811 17:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:26.071 /dev/nbd1 00:14:26.071 
17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.071 1+0 records in 00:14:26.071 1+0 records out 00:14:26.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329262 s, 12.4 MB/s 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.071 17:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:26.330 17:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:26.330 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.330 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.330 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.330 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:26.330 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.330 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.590 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77988 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77988 ']' 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77988 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77988 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.850 killing process with pid 77988 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77988' 00:14:26.850 
17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77988 00:14:26.850 Received shutdown signal, test time was about 60.000000 seconds 00:14:26.850 00:14:26.850 Latency(us) 00:14:26.850 [2024-11-20T17:48:54.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.850 [2024-11-20T17:48:54.026Z] =================================================================================================================== 00:14:26.850 [2024-11-20T17:48:54.026Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:26.850 [2024-11-20 17:48:53.886353] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.850 17:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77988 00:14:27.420 [2024-11-20 17:48:54.434934] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.801 17:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:28.801 00:14:28.801 real 0m17.440s 00:14:28.801 user 0m19.483s 00:14:28.801 sys 0m3.299s 00:14:28.801 17:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.801 17:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.801 ************************************ 00:14:28.802 END TEST raid_rebuild_test 00:14:28.802 ************************************ 00:14:28.802 17:48:55 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:28.802 17:48:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:28.802 17:48:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.802 17:48:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.802 ************************************ 00:14:28.802 START TEST raid_rebuild_test_sb 00:14:28.802 ************************************ 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78431 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78431 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78431 ']' 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.802 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.802 17:48:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.802 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:28.802 Zero copy mechanism will not be used. 00:14:28.802 [2024-11-20 17:48:55.830575] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:14:28.802 [2024-11-20 17:48:55.830703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78431 ] 00:14:29.062 [2024-11-20 17:48:56.009751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.062 [2024-11-20 17:48:56.148037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.322 [2024-11-20 17:48:56.382653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.322 [2024-11-20 17:48:56.382856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.582 BaseBdev1_malloc 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.582 [2024-11-20 17:48:56.701403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.582 [2024-11-20 17:48:56.701514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.582 [2024-11-20 17:48:56.701557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.582 [2024-11-20 17:48:56.701589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.582 [2024-11-20 17:48:56.703959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.582 [2024-11-20 17:48:56.704045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.582 BaseBdev1 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.582 BaseBdev2_malloc 00:14:29.582 17:48:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 [2024-11-20 17:48:56.762018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:29.843 [2024-11-20 17:48:56.762163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.843 [2024-11-20 17:48:56.762191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.843 [2024-11-20 17:48:56.762203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.843 [2024-11-20 17:48:56.764558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.843 [2024-11-20 17:48:56.764594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.843 BaseBdev2 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 BaseBdev3_malloc 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 [2024-11-20 17:48:56.832463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:29.843 [2024-11-20 17:48:56.832557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.843 [2024-11-20 17:48:56.832595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:29.843 [2024-11-20 17:48:56.832625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.843 [2024-11-20 17:48:56.835019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.843 [2024-11-20 17:48:56.835118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:29.843 BaseBdev3 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 BaseBdev4_malloc 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.843 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:29.843 [2024-11-20 17:48:56.894791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:29.844 [2024-11-20 17:48:56.894899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.844 [2024-11-20 17:48:56.894926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:29.844 [2024-11-20 17:48:56.894938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.844 [2024-11-20 17:48:56.897352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.844 [2024-11-20 17:48:56.897391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:29.844 BaseBdev4 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.844 spare_malloc 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.844 spare_delay 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.844 17:48:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.844 [2024-11-20 17:48:56.968128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.844 [2024-11-20 17:48:56.968237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.844 [2024-11-20 17:48:56.968276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:29.844 [2024-11-20 17:48:56.968288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.844 [2024-11-20 17:48:56.970735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.844 [2024-11-20 17:48:56.970810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.844 spare 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.844 [2024-11-20 17:48:56.980175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.844 [2024-11-20 17:48:56.982479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.844 [2024-11-20 17:48:56.982586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.844 [2024-11-20 17:48:56.982678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:29.844 [2024-11-20 17:48:56.982917] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:29.844 [2024-11-20 17:48:56.982969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:29.844 [2024-11-20 17:48:56.983272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:29.844 [2024-11-20 17:48:56.983504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:29.844 [2024-11-20 17:48:56.983559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:29.844 [2024-11-20 17:48:56.983757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.844 17:48:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.844 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.104 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.104 "name": "raid_bdev1", 00:14:30.104 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:30.104 "strip_size_kb": 0, 00:14:30.104 "state": "online", 00:14:30.104 "raid_level": "raid1", 00:14:30.104 "superblock": true, 00:14:30.104 "num_base_bdevs": 4, 00:14:30.104 "num_base_bdevs_discovered": 4, 00:14:30.104 "num_base_bdevs_operational": 4, 00:14:30.104 "base_bdevs_list": [ 00:14:30.104 { 00:14:30.104 "name": "BaseBdev1", 00:14:30.104 "uuid": "c52bc55b-aade-5810-bf62-1aec31c71dd7", 00:14:30.104 "is_configured": true, 00:14:30.104 "data_offset": 2048, 00:14:30.104 "data_size": 63488 00:14:30.104 }, 00:14:30.104 { 00:14:30.104 "name": "BaseBdev2", 00:14:30.104 "uuid": "9a7dda9c-9f4c-5b6b-85f3-23d4a4692858", 00:14:30.104 "is_configured": true, 00:14:30.104 "data_offset": 2048, 00:14:30.104 "data_size": 63488 00:14:30.104 }, 00:14:30.104 { 00:14:30.104 "name": "BaseBdev3", 00:14:30.104 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:30.104 "is_configured": true, 00:14:30.104 "data_offset": 2048, 00:14:30.104 "data_size": 63488 00:14:30.104 }, 00:14:30.104 { 00:14:30.104 "name": "BaseBdev4", 00:14:30.104 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:30.104 "is_configured": true, 00:14:30.104 "data_offset": 2048, 00:14:30.104 "data_size": 63488 00:14:30.104 } 00:14:30.104 ] 00:14:30.104 }' 00:14:30.104 17:48:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.104 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.364 [2024-11-20 17:48:57.463707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:30.364 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:30.624 [2024-11-20 17:48:57.750962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:30.624 /dev/nbd0 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:30.624 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:30.884 
17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.884 1+0 records in 00:14:30.884 1+0 records out 00:14:30.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238701 s, 17.2 MB/s 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:30.884 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:30.885 17:48:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:36.193 63488+0 records in 00:14:36.193 63488+0 records out 00:14:36.193 32505856 bytes (33 MB, 31 MiB) copied, 5.19073 s, 6.3 MB/s 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:36.193 [2024-11-20 17:49:03.221975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.193 [2024-11-20 17:49:03.258036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.193 
17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.193 "name": "raid_bdev1", 00:14:36.193 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:36.193 "strip_size_kb": 0, 00:14:36.193 "state": 
"online", 00:14:36.193 "raid_level": "raid1", 00:14:36.193 "superblock": true, 00:14:36.193 "num_base_bdevs": 4, 00:14:36.193 "num_base_bdevs_discovered": 3, 00:14:36.193 "num_base_bdevs_operational": 3, 00:14:36.193 "base_bdevs_list": [ 00:14:36.193 { 00:14:36.193 "name": null, 00:14:36.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.193 "is_configured": false, 00:14:36.193 "data_offset": 0, 00:14:36.193 "data_size": 63488 00:14:36.193 }, 00:14:36.193 { 00:14:36.193 "name": "BaseBdev2", 00:14:36.193 "uuid": "9a7dda9c-9f4c-5b6b-85f3-23d4a4692858", 00:14:36.193 "is_configured": true, 00:14:36.193 "data_offset": 2048, 00:14:36.193 "data_size": 63488 00:14:36.193 }, 00:14:36.193 { 00:14:36.193 "name": "BaseBdev3", 00:14:36.193 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:36.193 "is_configured": true, 00:14:36.193 "data_offset": 2048, 00:14:36.193 "data_size": 63488 00:14:36.193 }, 00:14:36.193 { 00:14:36.193 "name": "BaseBdev4", 00:14:36.193 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:36.193 "is_configured": true, 00:14:36.193 "data_offset": 2048, 00:14:36.193 "data_size": 63488 00:14:36.193 } 00:14:36.193 ] 00:14:36.193 }' 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.193 17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.762 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.762 17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.762 17:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.762 [2024-11-20 17:49:03.697259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.762 [2024-11-20 17:49:03.713591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:36.762 17:49:03 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.762 17:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:36.762 [2024-11-20 17:49:03.715891] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.704 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.704 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.704 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.704 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.704 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.704 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.705 "name": "raid_bdev1", 00:14:37.705 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:37.705 "strip_size_kb": 0, 00:14:37.705 "state": "online", 00:14:37.705 "raid_level": "raid1", 00:14:37.705 "superblock": true, 00:14:37.705 "num_base_bdevs": 4, 00:14:37.705 "num_base_bdevs_discovered": 4, 00:14:37.705 "num_base_bdevs_operational": 4, 00:14:37.705 "process": { 00:14:37.705 "type": "rebuild", 00:14:37.705 "target": "spare", 00:14:37.705 "progress": { 00:14:37.705 "blocks": 20480, 
00:14:37.705 "percent": 32 00:14:37.705 } 00:14:37.705 }, 00:14:37.705 "base_bdevs_list": [ 00:14:37.705 { 00:14:37.705 "name": "spare", 00:14:37.705 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:37.705 "is_configured": true, 00:14:37.705 "data_offset": 2048, 00:14:37.705 "data_size": 63488 00:14:37.705 }, 00:14:37.705 { 00:14:37.705 "name": "BaseBdev2", 00:14:37.705 "uuid": "9a7dda9c-9f4c-5b6b-85f3-23d4a4692858", 00:14:37.705 "is_configured": true, 00:14:37.705 "data_offset": 2048, 00:14:37.705 "data_size": 63488 00:14:37.705 }, 00:14:37.705 { 00:14:37.705 "name": "BaseBdev3", 00:14:37.705 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:37.705 "is_configured": true, 00:14:37.705 "data_offset": 2048, 00:14:37.705 "data_size": 63488 00:14:37.705 }, 00:14:37.705 { 00:14:37.705 "name": "BaseBdev4", 00:14:37.705 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:37.705 "is_configured": true, 00:14:37.705 "data_offset": 2048, 00:14:37.705 "data_size": 63488 00:14:37.705 } 00:14:37.705 ] 00:14:37.705 }' 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.705 17:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.966 [2024-11-20 17:49:04.879683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.966 [2024-11-20 17:49:04.925429] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.966 [2024-11-20 17:49:04.925582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.966 [2024-11-20 17:49:04.925632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.966 [2024-11-20 17:49:04.925662] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.966 17:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.966 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.966 "name": "raid_bdev1", 00:14:37.966 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:37.966 "strip_size_kb": 0, 00:14:37.966 "state": "online", 00:14:37.966 "raid_level": "raid1", 00:14:37.966 "superblock": true, 00:14:37.966 "num_base_bdevs": 4, 00:14:37.966 "num_base_bdevs_discovered": 3, 00:14:37.966 "num_base_bdevs_operational": 3, 00:14:37.966 "base_bdevs_list": [ 00:14:37.966 { 00:14:37.966 "name": null, 00:14:37.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.966 "is_configured": false, 00:14:37.966 "data_offset": 0, 00:14:37.966 "data_size": 63488 00:14:37.966 }, 00:14:37.966 { 00:14:37.966 "name": "BaseBdev2", 00:14:37.966 "uuid": "9a7dda9c-9f4c-5b6b-85f3-23d4a4692858", 00:14:37.966 "is_configured": true, 00:14:37.966 "data_offset": 2048, 00:14:37.966 "data_size": 63488 00:14:37.966 }, 00:14:37.966 { 00:14:37.966 "name": "BaseBdev3", 00:14:37.966 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:37.966 "is_configured": true, 00:14:37.966 "data_offset": 2048, 00:14:37.966 "data_size": 63488 00:14:37.966 }, 00:14:37.966 { 00:14:37.966 "name": "BaseBdev4", 00:14:37.966 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:37.966 "is_configured": true, 00:14:37.966 "data_offset": 2048, 00:14:37.966 "data_size": 63488 00:14:37.966 } 00:14:37.966 ] 00:14:37.966 }' 00:14:37.966 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.966 17:49:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.536 "name": "raid_bdev1", 00:14:38.536 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:38.536 "strip_size_kb": 0, 00:14:38.536 "state": "online", 00:14:38.536 "raid_level": "raid1", 00:14:38.536 "superblock": true, 00:14:38.536 "num_base_bdevs": 4, 00:14:38.536 "num_base_bdevs_discovered": 3, 00:14:38.536 "num_base_bdevs_operational": 3, 00:14:38.536 "base_bdevs_list": [ 00:14:38.536 { 00:14:38.536 "name": null, 00:14:38.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.536 "is_configured": false, 00:14:38.536 "data_offset": 0, 00:14:38.536 "data_size": 63488 00:14:38.536 }, 00:14:38.536 { 00:14:38.536 "name": "BaseBdev2", 00:14:38.536 "uuid": "9a7dda9c-9f4c-5b6b-85f3-23d4a4692858", 00:14:38.536 "is_configured": true, 00:14:38.536 "data_offset": 2048, 00:14:38.536 "data_size": 63488 00:14:38.536 }, 00:14:38.536 { 00:14:38.536 "name": "BaseBdev3", 00:14:38.536 "uuid": 
"843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:38.536 "is_configured": true, 00:14:38.536 "data_offset": 2048, 00:14:38.536 "data_size": 63488 00:14:38.536 }, 00:14:38.536 { 00:14:38.536 "name": "BaseBdev4", 00:14:38.536 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:38.536 "is_configured": true, 00:14:38.536 "data_offset": 2048, 00:14:38.536 "data_size": 63488 00:14:38.536 } 00:14:38.536 ] 00:14:38.536 }' 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.536 [2024-11-20 17:49:05.593747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.536 [2024-11-20 17:49:05.608923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.536 17:49:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:38.536 [2024-11-20 17:49:05.611325] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.475 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.736 "name": "raid_bdev1", 00:14:39.736 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:39.736 "strip_size_kb": 0, 00:14:39.736 "state": "online", 00:14:39.736 "raid_level": "raid1", 00:14:39.736 "superblock": true, 00:14:39.736 "num_base_bdevs": 4, 00:14:39.736 "num_base_bdevs_discovered": 4, 00:14:39.736 "num_base_bdevs_operational": 4, 00:14:39.736 "process": { 00:14:39.736 "type": "rebuild", 00:14:39.736 "target": "spare", 00:14:39.736 "progress": { 00:14:39.736 "blocks": 20480, 00:14:39.736 "percent": 32 00:14:39.736 } 00:14:39.736 }, 00:14:39.736 "base_bdevs_list": [ 00:14:39.736 { 00:14:39.736 "name": "spare", 00:14:39.736 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:39.736 "is_configured": true, 00:14:39.736 "data_offset": 2048, 00:14:39.736 "data_size": 63488 00:14:39.736 }, 00:14:39.736 { 00:14:39.736 "name": "BaseBdev2", 00:14:39.736 "uuid": "9a7dda9c-9f4c-5b6b-85f3-23d4a4692858", 00:14:39.736 "is_configured": true, 00:14:39.736 "data_offset": 2048, 
00:14:39.736 "data_size": 63488 00:14:39.736 }, 00:14:39.736 { 00:14:39.736 "name": "BaseBdev3", 00:14:39.736 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:39.736 "is_configured": true, 00:14:39.736 "data_offset": 2048, 00:14:39.736 "data_size": 63488 00:14:39.736 }, 00:14:39.736 { 00:14:39.736 "name": "BaseBdev4", 00:14:39.736 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:39.736 "is_configured": true, 00:14:39.736 "data_offset": 2048, 00:14:39.736 "data_size": 63488 00:14:39.736 } 00:14:39.736 ] 00:14:39.736 }' 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:39.736 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.736 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.736 [2024-11-20 17:49:06.774744] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.997 [2024-11-20 17:49:06.920305] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.997 "name": "raid_bdev1", 00:14:39.997 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:39.997 "strip_size_kb": 0, 00:14:39.997 "state": "online", 00:14:39.997 "raid_level": "raid1", 00:14:39.997 "superblock": true, 00:14:39.997 "num_base_bdevs": 4, 
00:14:39.997 "num_base_bdevs_discovered": 3, 00:14:39.997 "num_base_bdevs_operational": 3, 00:14:39.997 "process": { 00:14:39.997 "type": "rebuild", 00:14:39.997 "target": "spare", 00:14:39.997 "progress": { 00:14:39.997 "blocks": 24576, 00:14:39.997 "percent": 38 00:14:39.997 } 00:14:39.997 }, 00:14:39.997 "base_bdevs_list": [ 00:14:39.997 { 00:14:39.997 "name": "spare", 00:14:39.997 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:39.997 "is_configured": true, 00:14:39.997 "data_offset": 2048, 00:14:39.997 "data_size": 63488 00:14:39.997 }, 00:14:39.997 { 00:14:39.997 "name": null, 00:14:39.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.997 "is_configured": false, 00:14:39.997 "data_offset": 0, 00:14:39.997 "data_size": 63488 00:14:39.997 }, 00:14:39.997 { 00:14:39.997 "name": "BaseBdev3", 00:14:39.997 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:39.997 "is_configured": true, 00:14:39.997 "data_offset": 2048, 00:14:39.997 "data_size": 63488 00:14:39.997 }, 00:14:39.997 { 00:14:39.997 "name": "BaseBdev4", 00:14:39.997 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:39.997 "is_configured": true, 00:14:39.997 "data_offset": 2048, 00:14:39.997 "data_size": 63488 00:14:39.997 } 00:14:39.997 ] 00:14:39.997 }' 00:14:39.997 17:49:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.997 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=476 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.998 "name": "raid_bdev1", 00:14:39.998 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:39.998 "strip_size_kb": 0, 00:14:39.998 "state": "online", 00:14:39.998 "raid_level": "raid1", 00:14:39.998 "superblock": true, 00:14:39.998 "num_base_bdevs": 4, 00:14:39.998 "num_base_bdevs_discovered": 3, 00:14:39.998 "num_base_bdevs_operational": 3, 00:14:39.998 "process": { 00:14:39.998 "type": "rebuild", 00:14:39.998 "target": "spare", 00:14:39.998 "progress": { 00:14:39.998 "blocks": 26624, 00:14:39.998 "percent": 41 00:14:39.998 } 00:14:39.998 }, 00:14:39.998 "base_bdevs_list": [ 00:14:39.998 { 00:14:39.998 "name": "spare", 00:14:39.998 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:39.998 "is_configured": true, 00:14:39.998 "data_offset": 2048, 00:14:39.998 "data_size": 63488 00:14:39.998 }, 00:14:39.998 { 
00:14:39.998 "name": null, 00:14:39.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.998 "is_configured": false, 00:14:39.998 "data_offset": 0, 00:14:39.998 "data_size": 63488 00:14:39.998 }, 00:14:39.998 { 00:14:39.998 "name": "BaseBdev3", 00:14:39.998 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:39.998 "is_configured": true, 00:14:39.998 "data_offset": 2048, 00:14:39.998 "data_size": 63488 00:14:39.998 }, 00:14:39.998 { 00:14:39.998 "name": "BaseBdev4", 00:14:39.998 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:39.998 "is_configured": true, 00:14:39.998 "data_offset": 2048, 00:14:39.998 "data_size": 63488 00:14:39.998 } 00:14:39.998 ] 00:14:39.998 }' 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.998 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.258 17:49:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.198 "name": "raid_bdev1", 00:14:41.198 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:41.198 "strip_size_kb": 0, 00:14:41.198 "state": "online", 00:14:41.198 "raid_level": "raid1", 00:14:41.198 "superblock": true, 00:14:41.198 "num_base_bdevs": 4, 00:14:41.198 "num_base_bdevs_discovered": 3, 00:14:41.198 "num_base_bdevs_operational": 3, 00:14:41.198 "process": { 00:14:41.198 "type": "rebuild", 00:14:41.198 "target": "spare", 00:14:41.198 "progress": { 00:14:41.198 "blocks": 49152, 00:14:41.198 "percent": 77 00:14:41.198 } 00:14:41.198 }, 00:14:41.198 "base_bdevs_list": [ 00:14:41.198 { 00:14:41.198 "name": "spare", 00:14:41.198 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:41.198 "is_configured": true, 00:14:41.198 "data_offset": 2048, 00:14:41.198 "data_size": 63488 00:14:41.198 }, 00:14:41.198 { 00:14:41.198 "name": null, 00:14:41.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.198 "is_configured": false, 00:14:41.198 "data_offset": 0, 00:14:41.198 "data_size": 63488 00:14:41.198 }, 00:14:41.198 { 00:14:41.198 "name": "BaseBdev3", 00:14:41.198 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:41.198 "is_configured": true, 00:14:41.198 "data_offset": 2048, 00:14:41.198 "data_size": 63488 00:14:41.198 }, 00:14:41.198 { 00:14:41.198 "name": "BaseBdev4", 00:14:41.198 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:41.198 "is_configured": true, 00:14:41.198 "data_offset": 
2048, 00:14:41.198 "data_size": 63488 00:14:41.198 } 00:14:41.198 ] 00:14:41.198 }' 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.198 17:49:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.768 [2024-11-20 17:49:08.835361] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:41.768 [2024-11-20 17:49:08.835463] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:41.768 [2024-11-20 17:49:08.835601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.337 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.337 "name": "raid_bdev1", 00:14:42.337 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:42.337 "strip_size_kb": 0, 00:14:42.337 "state": "online", 00:14:42.337 "raid_level": "raid1", 00:14:42.337 "superblock": true, 00:14:42.337 "num_base_bdevs": 4, 00:14:42.337 "num_base_bdevs_discovered": 3, 00:14:42.337 "num_base_bdevs_operational": 3, 00:14:42.337 "base_bdevs_list": [ 00:14:42.337 { 00:14:42.337 "name": "spare", 00:14:42.337 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:42.337 "is_configured": true, 00:14:42.337 "data_offset": 2048, 00:14:42.337 "data_size": 63488 00:14:42.337 }, 00:14:42.337 { 00:14:42.337 "name": null, 00:14:42.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.337 "is_configured": false, 00:14:42.337 "data_offset": 0, 00:14:42.337 "data_size": 63488 00:14:42.337 }, 00:14:42.337 { 00:14:42.337 "name": "BaseBdev3", 00:14:42.337 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:42.337 "is_configured": true, 00:14:42.337 "data_offset": 2048, 00:14:42.337 "data_size": 63488 00:14:42.337 }, 00:14:42.337 { 00:14:42.337 "name": "BaseBdev4", 00:14:42.337 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:42.337 "is_configured": true, 00:14:42.337 "data_offset": 2048, 00:14:42.338 "data_size": 63488 00:14:42.338 } 00:14:42.338 ] 00:14:42.338 }' 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.338 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.597 "name": "raid_bdev1", 00:14:42.597 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:42.597 "strip_size_kb": 0, 00:14:42.597 "state": "online", 00:14:42.597 "raid_level": "raid1", 00:14:42.597 "superblock": true, 00:14:42.597 "num_base_bdevs": 4, 00:14:42.597 "num_base_bdevs_discovered": 3, 00:14:42.597 "num_base_bdevs_operational": 3, 00:14:42.597 "base_bdevs_list": [ 00:14:42.597 { 00:14:42.597 "name": "spare", 00:14:42.597 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:42.597 "is_configured": true, 00:14:42.597 "data_offset": 2048, 
00:14:42.597 "data_size": 63488 00:14:42.597 }, 00:14:42.597 { 00:14:42.597 "name": null, 00:14:42.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.597 "is_configured": false, 00:14:42.597 "data_offset": 0, 00:14:42.597 "data_size": 63488 00:14:42.597 }, 00:14:42.597 { 00:14:42.597 "name": "BaseBdev3", 00:14:42.597 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:42.597 "is_configured": true, 00:14:42.597 "data_offset": 2048, 00:14:42.597 "data_size": 63488 00:14:42.597 }, 00:14:42.597 { 00:14:42.597 "name": "BaseBdev4", 00:14:42.597 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:42.597 "is_configured": true, 00:14:42.597 "data_offset": 2048, 00:14:42.597 "data_size": 63488 00:14:42.597 } 00:14:42.597 ] 00:14:42.597 }' 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.597 
17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.597 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.598 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.598 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.598 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.598 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.598 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.598 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.598 "name": "raid_bdev1", 00:14:42.598 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:42.598 "strip_size_kb": 0, 00:14:42.598 "state": "online", 00:14:42.598 "raid_level": "raid1", 00:14:42.598 "superblock": true, 00:14:42.598 "num_base_bdevs": 4, 00:14:42.598 "num_base_bdevs_discovered": 3, 00:14:42.598 "num_base_bdevs_operational": 3, 00:14:42.598 "base_bdevs_list": [ 00:14:42.598 { 00:14:42.598 "name": "spare", 00:14:42.598 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:42.598 "is_configured": true, 00:14:42.598 "data_offset": 2048, 00:14:42.598 "data_size": 63488 00:14:42.598 }, 00:14:42.598 { 00:14:42.598 "name": null, 00:14:42.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.598 "is_configured": false, 00:14:42.598 "data_offset": 0, 00:14:42.598 "data_size": 63488 00:14:42.598 }, 00:14:42.598 { 00:14:42.598 "name": "BaseBdev3", 00:14:42.598 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:42.598 "is_configured": true, 00:14:42.598 "data_offset": 2048, 00:14:42.598 "data_size": 63488 
00:14:42.598 }, 00:14:42.598 { 00:14:42.598 "name": "BaseBdev4", 00:14:42.598 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:42.598 "is_configured": true, 00:14:42.598 "data_offset": 2048, 00:14:42.598 "data_size": 63488 00:14:42.598 } 00:14:42.598 ] 00:14:42.598 }' 00:14:42.598 17:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.598 17:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.858 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:42.858 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.858 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.858 [2024-11-20 17:49:10.014664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.858 [2024-11-20 17:49:10.014700] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.858 [2024-11-20 17:49:10.014808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.858 [2024-11-20 17:49:10.014902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.858 [2024-11-20 17:49:10.014914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:42.858 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.858 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.858 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.858 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.858 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:43.118 
17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:43.118 /dev/nbd0 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:43.118 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.118 1+0 records in 00:14:43.118 1+0 records out 00:14:43.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401729 s, 10.2 MB/s 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:43.377 /dev/nbd1 00:14:43.377 17:49:10 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:43.377 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.637 1+0 records in 00:14:43.637 1+0 records out 00:14:43.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470997 s, 8.7 MB/s 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:43.637 17:49:10 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.637 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.897 17:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:44.156 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:44.156 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:44.156 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:44.156 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.156 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.156 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:44.156 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.157 [2024-11-20 17:49:11.218862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:44.157 [2024-11-20 17:49:11.218928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.157 [2024-11-20 17:49:11.218954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:44.157 [2024-11-20 17:49:11.218967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.157 [2024-11-20 17:49:11.221609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.157 [2024-11-20 17:49:11.221647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.157 [2024-11-20 17:49:11.221764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:44.157 [2024-11-20 17:49:11.221826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.157 [2024-11-20 17:49:11.221977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.157 [2024-11-20 17:49:11.222092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.157 spare 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.157 [2024-11-20 17:49:11.321994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:44.157 [2024-11-20 17:49:11.322023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:44.157 [2024-11-20 17:49:11.322335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:44.157 [2024-11-20 17:49:11.322506] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:44.157 [2024-11-20 17:49:11.322518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:44.157 [2024-11-20 17:49:11.322733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.157 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.416 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.416 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.416 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:44.416 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.416 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.416 "name": "raid_bdev1", 00:14:44.416 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:44.416 "strip_size_kb": 0, 00:14:44.416 "state": "online", 00:14:44.416 "raid_level": "raid1", 00:14:44.416 "superblock": true, 00:14:44.416 "num_base_bdevs": 4, 00:14:44.416 "num_base_bdevs_discovered": 3, 00:14:44.416 "num_base_bdevs_operational": 3, 00:14:44.416 "base_bdevs_list": [ 00:14:44.416 { 00:14:44.416 "name": "spare", 00:14:44.416 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:44.416 "is_configured": true, 00:14:44.416 "data_offset": 2048, 00:14:44.416 "data_size": 63488 00:14:44.416 }, 00:14:44.416 { 00:14:44.416 "name": null, 00:14:44.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.416 "is_configured": false, 00:14:44.416 "data_offset": 2048, 00:14:44.416 "data_size": 63488 00:14:44.416 }, 00:14:44.416 { 00:14:44.416 "name": "BaseBdev3", 00:14:44.416 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:44.416 "is_configured": true, 00:14:44.416 "data_offset": 2048, 00:14:44.416 "data_size": 63488 00:14:44.416 }, 00:14:44.416 { 00:14:44.416 "name": "BaseBdev4", 00:14:44.416 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:44.416 "is_configured": true, 00:14:44.416 "data_offset": 2048, 00:14:44.416 "data_size": 63488 00:14:44.416 } 00:14:44.416 ] 00:14:44.416 }' 00:14:44.417 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.417 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.676 
17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.676 "name": "raid_bdev1", 00:14:44.676 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:44.676 "strip_size_kb": 0, 00:14:44.676 "state": "online", 00:14:44.676 "raid_level": "raid1", 00:14:44.676 "superblock": true, 00:14:44.676 "num_base_bdevs": 4, 00:14:44.676 "num_base_bdevs_discovered": 3, 00:14:44.676 "num_base_bdevs_operational": 3, 00:14:44.676 "base_bdevs_list": [ 00:14:44.676 { 00:14:44.676 "name": "spare", 00:14:44.676 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:44.676 "is_configured": true, 00:14:44.676 "data_offset": 2048, 00:14:44.676 "data_size": 63488 00:14:44.676 }, 00:14:44.676 { 00:14:44.676 "name": null, 00:14:44.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.676 "is_configured": false, 00:14:44.676 "data_offset": 2048, 00:14:44.676 "data_size": 63488 00:14:44.676 }, 00:14:44.676 { 00:14:44.676 "name": "BaseBdev3", 00:14:44.676 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:44.676 "is_configured": true, 00:14:44.676 "data_offset": 2048, 00:14:44.676 "data_size": 63488 
00:14:44.676 }, 00:14:44.676 { 00:14:44.676 "name": "BaseBdev4", 00:14:44.676 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:44.676 "is_configured": true, 00:14:44.676 "data_offset": 2048, 00:14:44.676 "data_size": 63488 00:14:44.676 } 00:14:44.676 ] 00:14:44.676 }' 00:14:44.676 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.936 [2024-11-20 17:49:11.961721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.936 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.937 17:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.937 17:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.937 "name": "raid_bdev1", 00:14:44.937 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:44.937 "strip_size_kb": 0, 00:14:44.937 "state": "online", 00:14:44.937 "raid_level": "raid1", 00:14:44.937 "superblock": true, 00:14:44.937 "num_base_bdevs": 4, 00:14:44.937 "num_base_bdevs_discovered": 2, 00:14:44.937 
"num_base_bdevs_operational": 2, 00:14:44.937 "base_bdevs_list": [ 00:14:44.937 { 00:14:44.937 "name": null, 00:14:44.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.937 "is_configured": false, 00:14:44.937 "data_offset": 0, 00:14:44.937 "data_size": 63488 00:14:44.937 }, 00:14:44.937 { 00:14:44.937 "name": null, 00:14:44.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.937 "is_configured": false, 00:14:44.937 "data_offset": 2048, 00:14:44.937 "data_size": 63488 00:14:44.937 }, 00:14:44.937 { 00:14:44.937 "name": "BaseBdev3", 00:14:44.937 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:44.937 "is_configured": true, 00:14:44.937 "data_offset": 2048, 00:14:44.937 "data_size": 63488 00:14:44.937 }, 00:14:44.937 { 00:14:44.937 "name": "BaseBdev4", 00:14:44.937 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:44.937 "is_configured": true, 00:14:44.937 "data_offset": 2048, 00:14:44.937 "data_size": 63488 00:14:44.937 } 00:14:44.937 ] 00:14:44.937 }' 00:14:44.937 17:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.937 17:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.507 17:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.507 17:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.507 17:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.507 [2024-11-20 17:49:12.429004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.507 [2024-11-20 17:49:12.429331] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:45.507 [2024-11-20 17:49:12.429400] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:45.507 [2024-11-20 17:49:12.429468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.507 [2024-11-20 17:49:12.443953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:45.507 17:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.507 17:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:45.507 [2024-11-20 17:49:12.446199] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.446 "name": "raid_bdev1", 00:14:46.446 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:46.446 "strip_size_kb": 0, 00:14:46.446 "state": "online", 00:14:46.446 "raid_level": "raid1", 
00:14:46.446 "superblock": true, 00:14:46.446 "num_base_bdevs": 4, 00:14:46.446 "num_base_bdevs_discovered": 3, 00:14:46.446 "num_base_bdevs_operational": 3, 00:14:46.446 "process": { 00:14:46.446 "type": "rebuild", 00:14:46.446 "target": "spare", 00:14:46.446 "progress": { 00:14:46.446 "blocks": 20480, 00:14:46.446 "percent": 32 00:14:46.446 } 00:14:46.446 }, 00:14:46.446 "base_bdevs_list": [ 00:14:46.446 { 00:14:46.446 "name": "spare", 00:14:46.446 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:46.446 "is_configured": true, 00:14:46.446 "data_offset": 2048, 00:14:46.446 "data_size": 63488 00:14:46.446 }, 00:14:46.446 { 00:14:46.446 "name": null, 00:14:46.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.446 "is_configured": false, 00:14:46.446 "data_offset": 2048, 00:14:46.446 "data_size": 63488 00:14:46.446 }, 00:14:46.446 { 00:14:46.446 "name": "BaseBdev3", 00:14:46.446 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:46.446 "is_configured": true, 00:14:46.446 "data_offset": 2048, 00:14:46.446 "data_size": 63488 00:14:46.446 }, 00:14:46.446 { 00:14:46.446 "name": "BaseBdev4", 00:14:46.446 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:46.446 "is_configured": true, 00:14:46.446 "data_offset": 2048, 00:14:46.446 "data_size": 63488 00:14:46.446 } 00:14:46.446 ] 00:14:46.446 }' 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:46.446 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.446 [2024-11-20 17:49:13.613571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.707 [2024-11-20 17:49:13.655183] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.707 [2024-11-20 17:49:13.655242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.707 [2024-11-20 17:49:13.655261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.707 [2024-11-20 17:49:13.655269] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.707 "name": "raid_bdev1", 00:14:46.707 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:46.707 "strip_size_kb": 0, 00:14:46.707 "state": "online", 00:14:46.707 "raid_level": "raid1", 00:14:46.707 "superblock": true, 00:14:46.707 "num_base_bdevs": 4, 00:14:46.707 "num_base_bdevs_discovered": 2, 00:14:46.707 "num_base_bdevs_operational": 2, 00:14:46.707 "base_bdevs_list": [ 00:14:46.707 { 00:14:46.707 "name": null, 00:14:46.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.707 "is_configured": false, 00:14:46.707 "data_offset": 0, 00:14:46.707 "data_size": 63488 00:14:46.707 }, 00:14:46.707 { 00:14:46.707 "name": null, 00:14:46.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.707 "is_configured": false, 00:14:46.707 "data_offset": 2048, 00:14:46.707 "data_size": 63488 00:14:46.707 }, 00:14:46.707 { 00:14:46.707 "name": "BaseBdev3", 00:14:46.707 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:46.707 "is_configured": true, 00:14:46.707 "data_offset": 2048, 00:14:46.707 "data_size": 63488 00:14:46.707 }, 00:14:46.707 { 00:14:46.707 "name": "BaseBdev4", 00:14:46.707 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:46.707 "is_configured": true, 00:14:46.707 "data_offset": 2048, 00:14:46.707 "data_size": 63488 00:14:46.707 } 00:14:46.707 ] 00:14:46.707 }' 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:46.707 17:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.301 17:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.301 17:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.301 17:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.301 [2024-11-20 17:49:14.174590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.301 [2024-11-20 17:49:14.174742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.301 [2024-11-20 17:49:14.174815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:47.301 [2024-11-20 17:49:14.174852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.301 [2024-11-20 17:49:14.175517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.301 [2024-11-20 17:49:14.175590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:47.301 [2024-11-20 17:49:14.175742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:47.301 [2024-11-20 17:49:14.175784] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:47.301 [2024-11-20 17:49:14.175832] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:47.301 [2024-11-20 17:49:14.175888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.301 [2024-11-20 17:49:14.191793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:47.301 spare 00:14:47.301 17:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.301 17:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:47.301 [2024-11-20 17:49:14.194143] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.240 "name": "raid_bdev1", 00:14:48.240 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:48.240 "strip_size_kb": 0, 00:14:48.240 "state": "online", 00:14:48.240 
"raid_level": "raid1", 00:14:48.240 "superblock": true, 00:14:48.240 "num_base_bdevs": 4, 00:14:48.240 "num_base_bdevs_discovered": 3, 00:14:48.240 "num_base_bdevs_operational": 3, 00:14:48.240 "process": { 00:14:48.240 "type": "rebuild", 00:14:48.240 "target": "spare", 00:14:48.240 "progress": { 00:14:48.240 "blocks": 20480, 00:14:48.240 "percent": 32 00:14:48.240 } 00:14:48.240 }, 00:14:48.240 "base_bdevs_list": [ 00:14:48.240 { 00:14:48.240 "name": "spare", 00:14:48.240 "uuid": "955e89a8-ee48-5bd6-ae0e-56522d8d350c", 00:14:48.240 "is_configured": true, 00:14:48.240 "data_offset": 2048, 00:14:48.240 "data_size": 63488 00:14:48.240 }, 00:14:48.240 { 00:14:48.240 "name": null, 00:14:48.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.240 "is_configured": false, 00:14:48.240 "data_offset": 2048, 00:14:48.240 "data_size": 63488 00:14:48.240 }, 00:14:48.240 { 00:14:48.240 "name": "BaseBdev3", 00:14:48.240 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:48.240 "is_configured": true, 00:14:48.240 "data_offset": 2048, 00:14:48.240 "data_size": 63488 00:14:48.240 }, 00:14:48.240 { 00:14:48.240 "name": "BaseBdev4", 00:14:48.240 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:48.240 "is_configured": true, 00:14:48.240 "data_offset": 2048, 00:14:48.240 "data_size": 63488 00:14:48.240 } 00:14:48.240 ] 00:14:48.240 }' 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.240 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.240 [2024-11-20 17:49:15.341689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.240 [2024-11-20 17:49:15.403348] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.240 [2024-11-20 17:49:15.403414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.240 [2024-11-20 17:49:15.403430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.240 [2024-11-20 17:49:15.403440] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.500 
17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.500 "name": "raid_bdev1", 00:14:48.500 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:48.500 "strip_size_kb": 0, 00:14:48.500 "state": "online", 00:14:48.500 "raid_level": "raid1", 00:14:48.500 "superblock": true, 00:14:48.500 "num_base_bdevs": 4, 00:14:48.500 "num_base_bdevs_discovered": 2, 00:14:48.500 "num_base_bdevs_operational": 2, 00:14:48.500 "base_bdevs_list": [ 00:14:48.500 { 00:14:48.500 "name": null, 00:14:48.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.500 "is_configured": false, 00:14:48.500 "data_offset": 0, 00:14:48.500 "data_size": 63488 00:14:48.500 }, 00:14:48.500 { 00:14:48.500 "name": null, 00:14:48.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.500 "is_configured": false, 00:14:48.500 "data_offset": 2048, 00:14:48.500 "data_size": 63488 00:14:48.500 }, 00:14:48.500 { 00:14:48.500 "name": "BaseBdev3", 00:14:48.500 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:48.500 "is_configured": true, 00:14:48.500 "data_offset": 2048, 00:14:48.500 "data_size": 63488 00:14:48.500 }, 00:14:48.500 { 00:14:48.500 "name": "BaseBdev4", 00:14:48.500 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:48.500 "is_configured": true, 00:14:48.500 "data_offset": 2048, 00:14:48.500 "data_size": 63488 00:14:48.500 } 00:14:48.500 ] 00:14:48.500 }' 00:14:48.500 17:49:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.500 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.759 17:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.020 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.020 "name": "raid_bdev1", 00:14:49.020 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:49.020 "strip_size_kb": 0, 00:14:49.020 "state": "online", 00:14:49.020 "raid_level": "raid1", 00:14:49.020 "superblock": true, 00:14:49.020 "num_base_bdevs": 4, 00:14:49.020 "num_base_bdevs_discovered": 2, 00:14:49.020 "num_base_bdevs_operational": 2, 00:14:49.020 "base_bdevs_list": [ 00:14:49.020 { 00:14:49.020 "name": null, 00:14:49.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.020 "is_configured": false, 00:14:49.020 "data_offset": 0, 00:14:49.020 "data_size": 63488 00:14:49.020 }, 00:14:49.020 
{ 00:14:49.020 "name": null, 00:14:49.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.020 "is_configured": false, 00:14:49.020 "data_offset": 2048, 00:14:49.020 "data_size": 63488 00:14:49.020 }, 00:14:49.020 { 00:14:49.020 "name": "BaseBdev3", 00:14:49.020 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:49.020 "is_configured": true, 00:14:49.020 "data_offset": 2048, 00:14:49.020 "data_size": 63488 00:14:49.020 }, 00:14:49.020 { 00:14:49.020 "name": "BaseBdev4", 00:14:49.020 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:49.020 "is_configured": true, 00:14:49.020 "data_offset": 2048, 00:14:49.020 "data_size": 63488 00:14:49.020 } 00:14:49.020 ] 00:14:49.020 }' 00:14:49.020 17:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.020 [2024-11-20 17:49:16.050937] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:49.020 [2024-11-20 17:49:16.051024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.020 [2024-11-20 17:49:16.051050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:49.020 [2024-11-20 17:49:16.051063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.020 [2024-11-20 17:49:16.051621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.020 [2024-11-20 17:49:16.051648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.020 [2024-11-20 17:49:16.051754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:49.020 [2024-11-20 17:49:16.051776] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:49.020 [2024-11-20 17:49:16.051785] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:49.020 [2024-11-20 17:49:16.051816] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:49.020 BaseBdev1 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.020 17:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.959 17:49:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.959 "name": "raid_bdev1", 00:14:49.959 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:49.959 "strip_size_kb": 0, 00:14:49.959 "state": "online", 00:14:49.959 "raid_level": "raid1", 00:14:49.959 "superblock": true, 00:14:49.959 "num_base_bdevs": 4, 00:14:49.959 "num_base_bdevs_discovered": 2, 00:14:49.959 "num_base_bdevs_operational": 2, 00:14:49.959 "base_bdevs_list": [ 00:14:49.959 { 00:14:49.959 "name": null, 00:14:49.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.959 "is_configured": false, 00:14:49.959 "data_offset": 0, 00:14:49.959 "data_size": 63488 00:14:49.959 }, 00:14:49.959 { 00:14:49.959 "name": null, 00:14:49.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.959 
"is_configured": false, 00:14:49.959 "data_offset": 2048, 00:14:49.959 "data_size": 63488 00:14:49.959 }, 00:14:49.959 { 00:14:49.959 "name": "BaseBdev3", 00:14:49.959 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:49.959 "is_configured": true, 00:14:49.959 "data_offset": 2048, 00:14:49.959 "data_size": 63488 00:14:49.959 }, 00:14:49.959 { 00:14:49.959 "name": "BaseBdev4", 00:14:49.959 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:49.959 "is_configured": true, 00:14:49.959 "data_offset": 2048, 00:14:49.959 "data_size": 63488 00:14:49.959 } 00:14:49.959 ] 00:14:49.959 }' 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.959 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:50.529 "name": "raid_bdev1", 00:14:50.529 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:50.529 "strip_size_kb": 0, 00:14:50.529 "state": "online", 00:14:50.529 "raid_level": "raid1", 00:14:50.529 "superblock": true, 00:14:50.529 "num_base_bdevs": 4, 00:14:50.529 "num_base_bdevs_discovered": 2, 00:14:50.529 "num_base_bdevs_operational": 2, 00:14:50.529 "base_bdevs_list": [ 00:14:50.529 { 00:14:50.529 "name": null, 00:14:50.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.529 "is_configured": false, 00:14:50.529 "data_offset": 0, 00:14:50.529 "data_size": 63488 00:14:50.529 }, 00:14:50.529 { 00:14:50.529 "name": null, 00:14:50.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.529 "is_configured": false, 00:14:50.529 "data_offset": 2048, 00:14:50.529 "data_size": 63488 00:14:50.529 }, 00:14:50.529 { 00:14:50.529 "name": "BaseBdev3", 00:14:50.529 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:50.529 "is_configured": true, 00:14:50.529 "data_offset": 2048, 00:14:50.529 "data_size": 63488 00:14:50.529 }, 00:14:50.529 { 00:14:50.529 "name": "BaseBdev4", 00:14:50.529 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:50.529 "is_configured": true, 00:14:50.529 "data_offset": 2048, 00:14:50.529 "data_size": 63488 00:14:50.529 } 00:14:50.529 ] 00:14:50.529 }' 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.529 [2024-11-20 17:49:17.648546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.529 [2024-11-20 17:49:17.648798] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.529 [2024-11-20 17:49:17.648823] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:50.529 request: 00:14:50.529 { 00:14:50.529 "base_bdev": "BaseBdev1", 00:14:50.529 "raid_bdev": "raid_bdev1", 00:14:50.529 "method": "bdev_raid_add_base_bdev", 00:14:50.529 "req_id": 1 00:14:50.529 } 00:14:50.529 Got JSON-RPC error response 00:14:50.529 response: 00:14:50.529 { 00:14:50.529 "code": -22, 00:14:50.529 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:50.529 } 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:50.529 17:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.913 "name": "raid_bdev1", 00:14:51.913 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:51.913 "strip_size_kb": 0, 00:14:51.913 "state": "online", 00:14:51.913 "raid_level": "raid1", 00:14:51.913 "superblock": true, 00:14:51.913 "num_base_bdevs": 4, 00:14:51.913 "num_base_bdevs_discovered": 2, 00:14:51.913 "num_base_bdevs_operational": 2, 00:14:51.913 "base_bdevs_list": [ 00:14:51.913 { 00:14:51.913 "name": null, 00:14:51.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.913 "is_configured": false, 00:14:51.913 "data_offset": 0, 00:14:51.913 "data_size": 63488 00:14:51.913 }, 00:14:51.913 { 00:14:51.913 "name": null, 00:14:51.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.913 "is_configured": false, 00:14:51.913 "data_offset": 2048, 00:14:51.913 "data_size": 63488 00:14:51.913 }, 00:14:51.913 { 00:14:51.913 "name": "BaseBdev3", 00:14:51.913 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:51.913 "is_configured": true, 00:14:51.913 "data_offset": 2048, 00:14:51.913 "data_size": 63488 00:14:51.913 }, 00:14:51.913 { 00:14:51.913 "name": "BaseBdev4", 00:14:51.913 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:51.913 "is_configured": true, 00:14:51.913 "data_offset": 2048, 00:14:51.913 "data_size": 63488 00:14:51.913 } 00:14:51.913 ] 00:14:51.913 }' 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.913 17:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.174 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.175 17:49:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.175 "name": "raid_bdev1", 00:14:52.175 "uuid": "c97a1deb-a86d-47bd-8041-da37f0676abb", 00:14:52.175 "strip_size_kb": 0, 00:14:52.175 "state": "online", 00:14:52.175 "raid_level": "raid1", 00:14:52.175 "superblock": true, 00:14:52.175 "num_base_bdevs": 4, 00:14:52.175 "num_base_bdevs_discovered": 2, 00:14:52.175 "num_base_bdevs_operational": 2, 00:14:52.175 "base_bdevs_list": [ 00:14:52.175 { 00:14:52.175 "name": null, 00:14:52.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.175 "is_configured": false, 00:14:52.175 "data_offset": 0, 00:14:52.175 "data_size": 63488 00:14:52.175 }, 00:14:52.175 { 00:14:52.175 "name": null, 00:14:52.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.175 "is_configured": false, 00:14:52.175 "data_offset": 2048, 00:14:52.175 "data_size": 63488 00:14:52.175 }, 00:14:52.175 { 00:14:52.175 "name": "BaseBdev3", 00:14:52.175 "uuid": "843797a5-35a2-559e-9dca-b5dea459d13b", 00:14:52.175 "is_configured": true, 00:14:52.175 "data_offset": 2048, 00:14:52.175 "data_size": 63488 00:14:52.175 }, 
00:14:52.175 { 00:14:52.175 "name": "BaseBdev4", 00:14:52.175 "uuid": "be2f21d8-b967-5aa0-8217-ff37f0b4098d", 00:14:52.175 "is_configured": true, 00:14:52.175 "data_offset": 2048, 00:14:52.175 "data_size": 63488 00:14:52.175 } 00:14:52.175 ] 00:14:52.175 }' 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78431 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78431 ']' 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78431 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78431 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.175 killing process with pid 78431 00:14:52.175 Received shutdown signal, test time was about 60.000000 seconds 00:14:52.175 00:14:52.175 Latency(us) 00:14:52.175 [2024-11-20T17:49:19.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.175 [2024-11-20T17:49:19.351Z] 
=================================================================================================================== 00:14:52.175 [2024-11-20T17:49:19.351Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78431' 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78431 00:14:52.175 [2024-11-20 17:49:19.286501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.175 17:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78431 00:14:52.175 [2024-11-20 17:49:19.286655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.175 [2024-11-20 17:49:19.286736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.175 [2024-11-20 17:49:19.286746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:52.745 [2024-11-20 17:49:19.821610] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:54.125 00:14:54.125 real 0m25.329s 00:14:54.125 user 0m30.521s 00:14:54.125 sys 0m3.964s 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.125 ************************************ 00:14:54.125 END TEST raid_rebuild_test_sb 00:14:54.125 ************************************ 00:14:54.125 17:49:21 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:54.125 17:49:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:54.125 17:49:21 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.125 17:49:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.125 ************************************ 00:14:54.125 START TEST raid_rebuild_test_io 00:14:54.125 ************************************ 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79200 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79200 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79200 ']' 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.125 17:49:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.125 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:54.125 Zero copy mechanism will not be used. 00:14:54.125 [2024-11-20 17:49:21.232761] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:14:54.125 [2024-11-20 17:49:21.232891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79200 ] 00:14:54.385 [2024-11-20 17:49:21.412659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.385 [2024-11-20 17:49:21.546545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.645 [2024-11-20 17:49:21.775918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.645 [2024-11-20 17:49:21.775967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.904 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.904 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:54.905 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:14:54.905 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:54.905 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.905 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.164 BaseBdev1_malloc 00:14:55.164 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.164 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:55.164 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.164 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.164 [2024-11-20 17:49:22.124205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:55.164 [2024-11-20 17:49:22.124335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.165 [2024-11-20 17:49:22.124393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:55.165 [2024-11-20 17:49:22.124441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.165 [2024-11-20 17:49:22.127158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.165 [2024-11-20 17:49:22.127234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:55.165 BaseBdev1 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.165 BaseBdev2_malloc 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.165 [2024-11-20 17:49:22.186926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:55.165 [2024-11-20 17:49:22.187061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.165 [2024-11-20 17:49:22.187120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:55.165 [2024-11-20 17:49:22.187165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.165 [2024-11-20 17:49:22.189686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.165 [2024-11-20 17:49:22.189760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.165 BaseBdev2 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.165 BaseBdev3_malloc 00:14:55.165 
17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.165 [2024-11-20 17:49:22.272794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:55.165 [2024-11-20 17:49:22.272858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.165 [2024-11-20 17:49:22.272883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:55.165 [2024-11-20 17:49:22.272895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.165 [2024-11-20 17:49:22.275379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.165 [2024-11-20 17:49:22.275420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:55.165 BaseBdev3 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.165 BaseBdev4_malloc 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.165 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.165 [2024-11-20 17:49:22.334867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:55.165 [2024-11-20 17:49:22.334973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.165 [2024-11-20 17:49:22.334998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:55.165 [2024-11-20 17:49:22.335023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.165 [2024-11-20 17:49:22.337387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.165 [2024-11-20 17:49:22.337426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:55.425 BaseBdev4 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 spare_malloc 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 spare_delay 00:14:55.425 
17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 [2024-11-20 17:49:22.409524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:55.425 [2024-11-20 17:49:22.409640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.425 [2024-11-20 17:49:22.409665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:55.425 [2024-11-20 17:49:22.409678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.425 [2024-11-20 17:49:22.412093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.425 [2024-11-20 17:49:22.412130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:55.425 spare 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 [2024-11-20 17:49:22.421558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.425 [2024-11-20 17:49:22.423610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.425 [2024-11-20 17:49:22.423677] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.425 [2024-11-20 17:49:22.423729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:55.425 [2024-11-20 17:49:22.423811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:55.425 [2024-11-20 17:49:22.423823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:55.425 [2024-11-20 17:49:22.424126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:55.425 [2024-11-20 17:49:22.424315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:55.425 [2024-11-20 17:49:22.424334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:55.425 [2024-11-20 17:49:22.424504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.425 
17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.425 "name": "raid_bdev1", 00:14:55.425 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:14:55.425 "strip_size_kb": 0, 00:14:55.425 "state": "online", 00:14:55.425 "raid_level": "raid1", 00:14:55.425 "superblock": false, 00:14:55.425 "num_base_bdevs": 4, 00:14:55.425 "num_base_bdevs_discovered": 4, 00:14:55.425 "num_base_bdevs_operational": 4, 00:14:55.425 "base_bdevs_list": [ 00:14:55.425 { 00:14:55.425 "name": "BaseBdev1", 00:14:55.425 "uuid": "2c770fdf-00ad-58f2-8b6a-6b41238c038c", 00:14:55.425 "is_configured": true, 00:14:55.425 "data_offset": 0, 00:14:55.425 "data_size": 65536 00:14:55.425 }, 00:14:55.425 { 00:14:55.425 "name": "BaseBdev2", 00:14:55.425 "uuid": "d6f52b0f-950a-51bb-994b-c0310f3004bb", 00:14:55.425 "is_configured": true, 00:14:55.425 "data_offset": 0, 00:14:55.425 "data_size": 65536 00:14:55.425 }, 00:14:55.425 { 00:14:55.425 "name": "BaseBdev3", 00:14:55.425 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:14:55.425 "is_configured": true, 00:14:55.425 "data_offset": 0, 00:14:55.425 "data_size": 65536 00:14:55.425 }, 00:14:55.425 { 00:14:55.425 "name": "BaseBdev4", 00:14:55.425 "uuid": 
"6e136b43-3275-5073-90dc-86d9b5a41ade", 00:14:55.425 "is_configured": true, 00:14:55.425 "data_offset": 0, 00:14:55.425 "data_size": 65536 00:14:55.425 } 00:14:55.425 ] 00:14:55.425 }' 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.425 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:55.685 [2024-11-20 17:49:22.821426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:55.685 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.944 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:55.944 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.945 [2024-11-20 17:49:22.901041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.945 "name": "raid_bdev1", 00:14:55.945 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:14:55.945 "strip_size_kb": 0, 00:14:55.945 "state": "online", 00:14:55.945 "raid_level": "raid1", 00:14:55.945 "superblock": false, 00:14:55.945 "num_base_bdevs": 4, 00:14:55.945 "num_base_bdevs_discovered": 3, 00:14:55.945 "num_base_bdevs_operational": 3, 00:14:55.945 "base_bdevs_list": [ 00:14:55.945 { 00:14:55.945 "name": null, 00:14:55.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.945 "is_configured": false, 00:14:55.945 "data_offset": 0, 00:14:55.945 "data_size": 65536 00:14:55.945 }, 00:14:55.945 { 00:14:55.945 "name": "BaseBdev2", 00:14:55.945 "uuid": "d6f52b0f-950a-51bb-994b-c0310f3004bb", 00:14:55.945 "is_configured": true, 00:14:55.945 "data_offset": 0, 00:14:55.945 "data_size": 65536 00:14:55.945 }, 00:14:55.945 { 00:14:55.945 "name": "BaseBdev3", 00:14:55.945 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:14:55.945 "is_configured": true, 00:14:55.945 "data_offset": 0, 00:14:55.945 "data_size": 65536 00:14:55.945 }, 00:14:55.945 { 00:14:55.945 "name": "BaseBdev4", 00:14:55.945 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:14:55.945 "is_configured": true, 00:14:55.945 "data_offset": 0, 00:14:55.945 "data_size": 65536 00:14:55.945 } 00:14:55.945 ] 00:14:55.945 }' 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.945 17:49:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.945 [2024-11-20 17:49:22.981967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:14:55.945 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:55.945 Zero copy mechanism will not be used. 00:14:55.945 Running I/O for 60 seconds... 00:14:56.205 17:49:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:56.205 17:49:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.205 17:49:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.205 [2024-11-20 17:49:23.329236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:56.465 17:49:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.465 17:49:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:56.465 [2024-11-20 17:49:23.398374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:56.465 [2024-11-20 17:49:23.400615] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.465 [2024-11-20 17:49:23.510437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:56.465 [2024-11-20 17:49:23.511451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:56.725 [2024-11-20 17:49:23.736751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:57.244 143.00 IOPS, 429.00 MiB/s [2024-11-20T17:49:24.420Z] 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.244 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.244 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:57.244 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.244 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.244 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.244 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.244 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.244 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.504 "name": "raid_bdev1", 00:14:57.504 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:14:57.504 "strip_size_kb": 0, 00:14:57.504 "state": "online", 00:14:57.504 "raid_level": "raid1", 00:14:57.504 "superblock": false, 00:14:57.504 "num_base_bdevs": 4, 00:14:57.504 "num_base_bdevs_discovered": 4, 00:14:57.504 "num_base_bdevs_operational": 4, 00:14:57.504 "process": { 00:14:57.504 "type": "rebuild", 00:14:57.504 "target": "spare", 00:14:57.504 "progress": { 00:14:57.504 "blocks": 12288, 00:14:57.504 "percent": 18 00:14:57.504 } 00:14:57.504 }, 00:14:57.504 "base_bdevs_list": [ 00:14:57.504 { 00:14:57.504 "name": "spare", 00:14:57.504 "uuid": "4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:14:57.504 "is_configured": true, 00:14:57.504 "data_offset": 0, 00:14:57.504 "data_size": 65536 00:14:57.504 }, 00:14:57.504 { 00:14:57.504 "name": "BaseBdev2", 00:14:57.504 "uuid": "d6f52b0f-950a-51bb-994b-c0310f3004bb", 00:14:57.504 "is_configured": true, 00:14:57.504 "data_offset": 0, 00:14:57.504 "data_size": 65536 00:14:57.504 }, 00:14:57.504 { 00:14:57.504 "name": "BaseBdev3", 00:14:57.504 "uuid": 
"08373770-024d-5aaf-9666-a44ac0f33c60", 00:14:57.504 "is_configured": true, 00:14:57.504 "data_offset": 0, 00:14:57.504 "data_size": 65536 00:14:57.504 }, 00:14:57.504 { 00:14:57.504 "name": "BaseBdev4", 00:14:57.504 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:14:57.504 "is_configured": true, 00:14:57.504 "data_offset": 0, 00:14:57.504 "data_size": 65536 00:14:57.504 } 00:14:57.504 ] 00:14:57.504 }' 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.504 [2024-11-20 17:49:24.485817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.504 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.504 [2024-11-20 17:49:24.534025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:57.504 [2024-11-20 17:49:24.589766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:57.504 [2024-11-20 17:49:24.591826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:57.763 [2024-11-20 17:49:24.694104] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:57.763 [2024-11-20 17:49:24.699313] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.763 [2024-11-20 17:49:24.699349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:57.763 [2024-11-20 17:49:24.699363] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:57.763 [2024-11-20 17:49:24.724098] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.763 "name": "raid_bdev1", 00:14:57.763 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:14:57.763 "strip_size_kb": 0, 00:14:57.763 "state": "online", 00:14:57.763 "raid_level": "raid1", 00:14:57.763 "superblock": false, 00:14:57.763 "num_base_bdevs": 4, 00:14:57.763 "num_base_bdevs_discovered": 3, 00:14:57.763 "num_base_bdevs_operational": 3, 00:14:57.763 "base_bdevs_list": [ 00:14:57.763 { 00:14:57.763 "name": null, 00:14:57.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.763 "is_configured": false, 00:14:57.763 "data_offset": 0, 00:14:57.763 "data_size": 65536 00:14:57.763 }, 00:14:57.763 { 00:14:57.763 "name": "BaseBdev2", 00:14:57.763 "uuid": "d6f52b0f-950a-51bb-994b-c0310f3004bb", 00:14:57.763 "is_configured": true, 00:14:57.763 "data_offset": 0, 00:14:57.763 "data_size": 65536 00:14:57.763 }, 00:14:57.763 { 00:14:57.763 "name": "BaseBdev3", 00:14:57.763 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:14:57.763 "is_configured": true, 00:14:57.763 "data_offset": 0, 00:14:57.763 "data_size": 65536 00:14:57.763 }, 00:14:57.763 { 00:14:57.763 "name": "BaseBdev4", 00:14:57.763 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:14:57.763 "is_configured": true, 00:14:57.763 "data_offset": 0, 00:14:57.763 "data_size": 65536 00:14:57.763 } 00:14:57.763 ] 00:14:57.763 }' 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.763 17:49:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.282 128.00 IOPS, 384.00 MiB/s [2024-11-20T17:49:25.458Z] 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 
none none 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.282 "name": "raid_bdev1", 00:14:58.282 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:14:58.282 "strip_size_kb": 0, 00:14:58.282 "state": "online", 00:14:58.282 "raid_level": "raid1", 00:14:58.282 "superblock": false, 00:14:58.282 "num_base_bdevs": 4, 00:14:58.282 "num_base_bdevs_discovered": 3, 00:14:58.282 "num_base_bdevs_operational": 3, 00:14:58.282 "base_bdevs_list": [ 00:14:58.282 { 00:14:58.282 "name": null, 00:14:58.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.282 "is_configured": false, 00:14:58.282 "data_offset": 0, 00:14:58.282 "data_size": 65536 00:14:58.282 }, 00:14:58.282 { 00:14:58.282 "name": "BaseBdev2", 00:14:58.282 "uuid": "d6f52b0f-950a-51bb-994b-c0310f3004bb", 00:14:58.282 "is_configured": true, 00:14:58.282 "data_offset": 0, 00:14:58.282 "data_size": 65536 00:14:58.282 }, 00:14:58.282 { 00:14:58.282 "name": "BaseBdev3", 00:14:58.282 "uuid": 
"08373770-024d-5aaf-9666-a44ac0f33c60", 00:14:58.282 "is_configured": true, 00:14:58.282 "data_offset": 0, 00:14:58.282 "data_size": 65536 00:14:58.282 }, 00:14:58.282 { 00:14:58.282 "name": "BaseBdev4", 00:14:58.282 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:14:58.282 "is_configured": true, 00:14:58.282 "data_offset": 0, 00:14:58.282 "data_size": 65536 00:14:58.282 } 00:14:58.282 ] 00:14:58.282 }' 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:58.282 17:49:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.283 17:49:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.283 [2024-11-20 17:49:25.354064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.283 17:49:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.283 17:49:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:58.283 [2024-11-20 17:49:25.408284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:58.283 [2024-11-20 17:49:25.410540] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:58.543 [2024-11-20 17:49:25.521656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:58.543 [2024-11-20 17:49:25.523976] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:58.802 [2024-11-20 17:49:25.752092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:58.802 [2024-11-20 17:49:25.752580] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.062 140.33 IOPS, 421.00 MiB/s [2024-11-20T17:49:26.238Z] [2024-11-20 17:49:26.014726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:59.322 [2024-11-20 17:49:26.240515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:59.322 [2024-11-20 17:49:26.241775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.322 17:49:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.322 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.322 "name": "raid_bdev1", 00:14:59.322 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:14:59.322 "strip_size_kb": 0, 00:14:59.322 "state": "online", 00:14:59.322 "raid_level": "raid1", 00:14:59.322 "superblock": false, 00:14:59.322 "num_base_bdevs": 4, 00:14:59.322 "num_base_bdevs_discovered": 4, 00:14:59.322 "num_base_bdevs_operational": 4, 00:14:59.322 "process": { 00:14:59.322 "type": "rebuild", 00:14:59.322 "target": "spare", 00:14:59.322 "progress": { 00:14:59.322 "blocks": 10240, 00:14:59.322 "percent": 15 00:14:59.322 } 00:14:59.322 }, 00:14:59.322 "base_bdevs_list": [ 00:14:59.323 { 00:14:59.323 "name": "spare", 00:14:59.323 "uuid": "4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:14:59.323 "is_configured": true, 00:14:59.323 "data_offset": 0, 00:14:59.323 "data_size": 65536 00:14:59.323 }, 00:14:59.323 { 00:14:59.323 "name": "BaseBdev2", 00:14:59.323 "uuid": "d6f52b0f-950a-51bb-994b-c0310f3004bb", 00:14:59.323 "is_configured": true, 00:14:59.323 "data_offset": 0, 00:14:59.323 "data_size": 65536 00:14:59.323 }, 00:14:59.323 { 00:14:59.323 "name": "BaseBdev3", 00:14:59.323 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:14:59.323 "is_configured": true, 00:14:59.323 "data_offset": 0, 00:14:59.323 "data_size": 65536 00:14:59.323 }, 00:14:59.323 { 00:14:59.323 "name": "BaseBdev4", 00:14:59.323 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:14:59.323 "is_configured": true, 00:14:59.323 "data_offset": 0, 00:14:59.323 "data_size": 65536 00:14:59.323 } 00:14:59.323 ] 00:14:59.323 }' 00:14:59.323 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.583 [2024-11-20 17:49:26.565641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.583 [2024-11-20 17:49:26.586158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:59.583 [2024-11-20 17:49:26.588557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:59.583 [2024-11-20 17:49:26.691325] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:59.583 [2024-11-20 17:49:26.691420] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:59.583 [2024-11-20 17:49:26.693906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 
00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.583 "name": "raid_bdev1", 00:14:59.583 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:14:59.583 "strip_size_kb": 0, 00:14:59.583 "state": "online", 00:14:59.583 "raid_level": "raid1", 00:14:59.583 "superblock": false, 00:14:59.583 "num_base_bdevs": 4, 00:14:59.583 "num_base_bdevs_discovered": 3, 00:14:59.583 "num_base_bdevs_operational": 3, 00:14:59.583 "process": { 00:14:59.583 "type": "rebuild", 00:14:59.583 "target": "spare", 00:14:59.583 "progress": { 00:14:59.583 "blocks": 14336, 00:14:59.583 "percent": 21 00:14:59.583 } 00:14:59.583 }, 00:14:59.583 "base_bdevs_list": [ 00:14:59.583 { 00:14:59.583 "name": "spare", 00:14:59.583 "uuid": 
"4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:14:59.583 "is_configured": true, 00:14:59.583 "data_offset": 0, 00:14:59.583 "data_size": 65536 00:14:59.583 }, 00:14:59.583 { 00:14:59.583 "name": null, 00:14:59.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.583 "is_configured": false, 00:14:59.583 "data_offset": 0, 00:14:59.583 "data_size": 65536 00:14:59.583 }, 00:14:59.583 { 00:14:59.583 "name": "BaseBdev3", 00:14:59.583 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:14:59.583 "is_configured": true, 00:14:59.583 "data_offset": 0, 00:14:59.583 "data_size": 65536 00:14:59.583 }, 00:14:59.583 { 00:14:59.583 "name": "BaseBdev4", 00:14:59.583 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:14:59.583 "is_configured": true, 00:14:59.583 "data_offset": 0, 00:14:59.583 "data_size": 65536 00:14:59.583 } 00:14:59.583 ] 00:14:59.583 }' 00:14:59.583 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.844 [2024-11-20 17:49:26.823490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=495 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.844 "name": "raid_bdev1", 00:14:59.844 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:14:59.844 "strip_size_kb": 0, 00:14:59.844 "state": "online", 00:14:59.844 "raid_level": "raid1", 00:14:59.844 "superblock": false, 00:14:59.844 "num_base_bdevs": 4, 00:14:59.844 "num_base_bdevs_discovered": 3, 00:14:59.844 "num_base_bdevs_operational": 3, 00:14:59.844 "process": { 00:14:59.844 "type": "rebuild", 00:14:59.844 "target": "spare", 00:14:59.844 "progress": { 00:14:59.844 "blocks": 16384, 00:14:59.844 "percent": 25 00:14:59.844 } 00:14:59.844 }, 00:14:59.844 "base_bdevs_list": [ 00:14:59.844 { 00:14:59.844 "name": "spare", 00:14:59.844 "uuid": "4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:14:59.844 "is_configured": true, 00:14:59.844 "data_offset": 0, 00:14:59.844 "data_size": 65536 00:14:59.844 }, 00:14:59.844 { 00:14:59.844 "name": null, 00:14:59.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.844 "is_configured": false, 00:14:59.844 "data_offset": 0, 00:14:59.844 "data_size": 65536 00:14:59.844 }, 00:14:59.844 { 00:14:59.844 "name": "BaseBdev3", 
00:14:59.844 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:14:59.844 "is_configured": true, 00:14:59.844 "data_offset": 0, 00:14:59.844 "data_size": 65536 00:14:59.844 }, 00:14:59.844 { 00:14:59.844 "name": "BaseBdev4", 00:14:59.844 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:14:59.844 "is_configured": true, 00:14:59.844 "data_offset": 0, 00:14:59.844 "data_size": 65536 00:14:59.844 } 00:14:59.844 ] 00:14:59.844 }' 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.844 17:49:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.414 123.75 IOPS, 371.25 MiB/s [2024-11-20T17:49:27.590Z] [2024-11-20 17:49:27.472421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:00.689 [2024-11-20 17:49:27.692800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:00.969 [2024-11-20 17:49:27.939095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.969 17:49:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.969 110.40 IOPS, 331.20 MiB/s [2024-11-20T17:49:28.145Z] 17:49:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.969 17:49:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.969 "name": "raid_bdev1", 00:15:00.969 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:15:00.969 "strip_size_kb": 0, 00:15:00.969 "state": "online", 00:15:00.969 "raid_level": "raid1", 00:15:00.969 "superblock": false, 00:15:00.969 "num_base_bdevs": 4, 00:15:00.969 "num_base_bdevs_discovered": 3, 00:15:00.969 "num_base_bdevs_operational": 3, 00:15:00.969 "process": { 00:15:00.969 "type": "rebuild", 00:15:00.969 "target": "spare", 00:15:00.969 "progress": { 00:15:00.969 "blocks": 32768, 00:15:00.969 "percent": 50 00:15:00.969 } 00:15:00.969 }, 00:15:00.969 "base_bdevs_list": [ 00:15:00.969 { 00:15:00.969 "name": "spare", 00:15:00.969 "uuid": "4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:15:00.969 "is_configured": true, 00:15:00.969 "data_offset": 0, 00:15:00.969 "data_size": 65536 00:15:00.969 }, 00:15:00.969 { 00:15:00.969 "name": null, 00:15:00.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.969 "is_configured": false, 00:15:00.969 "data_offset": 0, 00:15:00.969 "data_size": 65536 00:15:00.969 }, 00:15:00.969 { 00:15:00.969 "name": "BaseBdev3", 
00:15:00.969 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:15:00.969 "is_configured": true, 00:15:00.969 "data_offset": 0, 00:15:00.969 "data_size": 65536 00:15:00.969 }, 00:15:00.969 { 00:15:00.969 "name": "BaseBdev4", 00:15:00.969 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:15:00.969 "is_configured": true, 00:15:00.969 "data_offset": 0, 00:15:00.969 "data_size": 65536 00:15:00.969 } 00:15:00.969 ] 00:15:00.969 }' 00:15:00.969 17:49:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.969 [2024-11-20 17:49:28.063154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:00.969 [2024-11-20 17:49:28.063517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:00.969 17:49:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.969 17:49:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.969 17:49:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.969 17:49:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.909 [2024-11-20 17:49:28.875053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:02.168 101.67 IOPS, 305.00 MiB/s [2024-11-20T17:49:29.344Z] 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.168 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.168 "name": "raid_bdev1", 00:15:02.168 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:15:02.168 "strip_size_kb": 0, 00:15:02.168 "state": "online", 00:15:02.168 "raid_level": "raid1", 00:15:02.168 "superblock": false, 00:15:02.168 "num_base_bdevs": 4, 00:15:02.168 "num_base_bdevs_discovered": 3, 00:15:02.168 "num_base_bdevs_operational": 3, 00:15:02.168 "process": { 00:15:02.168 "type": "rebuild", 00:15:02.168 "target": "spare", 00:15:02.168 "progress": { 00:15:02.168 "blocks": 49152, 00:15:02.168 "percent": 75 00:15:02.168 } 00:15:02.168 }, 00:15:02.168 "base_bdevs_list": [ 00:15:02.168 { 00:15:02.168 "name": "spare", 00:15:02.168 "uuid": "4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:15:02.168 "is_configured": true, 00:15:02.168 "data_offset": 0, 00:15:02.168 "data_size": 65536 00:15:02.168 }, 00:15:02.168 { 00:15:02.168 "name": null, 00:15:02.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.168 "is_configured": false, 00:15:02.168 "data_offset": 0, 00:15:02.168 "data_size": 65536 00:15:02.168 }, 00:15:02.168 { 00:15:02.168 "name": "BaseBdev3", 00:15:02.168 "uuid": 
"08373770-024d-5aaf-9666-a44ac0f33c60", 00:15:02.169 "is_configured": true, 00:15:02.169 "data_offset": 0, 00:15:02.169 "data_size": 65536 00:15:02.169 }, 00:15:02.169 { 00:15:02.169 "name": "BaseBdev4", 00:15:02.169 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:15:02.169 "is_configured": true, 00:15:02.169 "data_offset": 0, 00:15:02.169 "data_size": 65536 00:15:02.169 } 00:15:02.169 ] 00:15:02.169 }' 00:15:02.169 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.169 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.169 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.169 [2024-11-20 17:49:29.237691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:02.169 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.169 17:49:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.116 92.00 IOPS, 276.00 MiB/s [2024-11-20T17:49:30.292Z] [2024-11-20 17:49:30.038713] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:03.116 [2024-11-20 17:49:30.138600] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:03.116 [2024-11-20 17:49:30.143412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.116 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.116 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.116 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.117 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:03.117 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.117 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.117 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.117 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.117 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.117 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.376 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.376 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.376 "name": "raid_bdev1", 00:15:03.376 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:15:03.377 "strip_size_kb": 0, 00:15:03.377 "state": "online", 00:15:03.377 "raid_level": "raid1", 00:15:03.377 "superblock": false, 00:15:03.377 "num_base_bdevs": 4, 00:15:03.377 "num_base_bdevs_discovered": 3, 00:15:03.377 "num_base_bdevs_operational": 3, 00:15:03.377 "base_bdevs_list": [ 00:15:03.377 { 00:15:03.377 "name": "spare", 00:15:03.377 "uuid": "4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:15:03.377 "is_configured": true, 00:15:03.377 "data_offset": 0, 00:15:03.377 "data_size": 65536 00:15:03.377 }, 00:15:03.377 { 00:15:03.377 "name": null, 00:15:03.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.377 "is_configured": false, 00:15:03.377 "data_offset": 0, 00:15:03.377 "data_size": 65536 00:15:03.377 }, 00:15:03.377 { 00:15:03.377 "name": "BaseBdev3", 00:15:03.377 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:15:03.377 "is_configured": true, 00:15:03.377 "data_offset": 0, 00:15:03.377 "data_size": 65536 00:15:03.377 }, 00:15:03.377 { 00:15:03.377 "name": "BaseBdev4", 00:15:03.377 
"uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:15:03.377 "is_configured": true, 00:15:03.377 "data_offset": 0, 00:15:03.377 "data_size": 65536 00:15:03.377 } 00:15:03.377 ] 00:15:03.377 }' 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.377 "name": "raid_bdev1", 00:15:03.377 "uuid": 
"8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:15:03.377 "strip_size_kb": 0, 00:15:03.377 "state": "online", 00:15:03.377 "raid_level": "raid1", 00:15:03.377 "superblock": false, 00:15:03.377 "num_base_bdevs": 4, 00:15:03.377 "num_base_bdevs_discovered": 3, 00:15:03.377 "num_base_bdevs_operational": 3, 00:15:03.377 "base_bdevs_list": [ 00:15:03.377 { 00:15:03.377 "name": "spare", 00:15:03.377 "uuid": "4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:15:03.377 "is_configured": true, 00:15:03.377 "data_offset": 0, 00:15:03.377 "data_size": 65536 00:15:03.377 }, 00:15:03.377 { 00:15:03.377 "name": null, 00:15:03.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.377 "is_configured": false, 00:15:03.377 "data_offset": 0, 00:15:03.377 "data_size": 65536 00:15:03.377 }, 00:15:03.377 { 00:15:03.377 "name": "BaseBdev3", 00:15:03.377 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:15:03.377 "is_configured": true, 00:15:03.377 "data_offset": 0, 00:15:03.377 "data_size": 65536 00:15:03.377 }, 00:15:03.377 { 00:15:03.377 "name": "BaseBdev4", 00:15:03.377 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:15:03.377 "is_configured": true, 00:15:03.377 "data_offset": 0, 00:15:03.377 "data_size": 65536 00:15:03.377 } 00:15:03.377 ] 00:15:03.377 }' 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.377 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.637 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.637 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.637 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.637 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.637 "name": "raid_bdev1", 00:15:03.637 "uuid": "8fb84eaa-b602-4f4f-b82a-eb462e1ffdb4", 00:15:03.637 "strip_size_kb": 0, 00:15:03.637 "state": "online", 00:15:03.637 "raid_level": "raid1", 00:15:03.637 "superblock": false, 00:15:03.637 "num_base_bdevs": 4, 00:15:03.637 "num_base_bdevs_discovered": 3, 00:15:03.637 "num_base_bdevs_operational": 3, 00:15:03.637 "base_bdevs_list": [ 00:15:03.637 { 00:15:03.637 "name": "spare", 00:15:03.637 "uuid": "4b7143f8-8f29-5b00-a503-95cd4aafb263", 00:15:03.637 "is_configured": true, 00:15:03.637 
"data_offset": 0, 00:15:03.637 "data_size": 65536 00:15:03.637 }, 00:15:03.637 { 00:15:03.637 "name": null, 00:15:03.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.637 "is_configured": false, 00:15:03.637 "data_offset": 0, 00:15:03.637 "data_size": 65536 00:15:03.637 }, 00:15:03.637 { 00:15:03.637 "name": "BaseBdev3", 00:15:03.637 "uuid": "08373770-024d-5aaf-9666-a44ac0f33c60", 00:15:03.637 "is_configured": true, 00:15:03.637 "data_offset": 0, 00:15:03.637 "data_size": 65536 00:15:03.637 }, 00:15:03.637 { 00:15:03.637 "name": "BaseBdev4", 00:15:03.637 "uuid": "6e136b43-3275-5073-90dc-86d9b5a41ade", 00:15:03.637 "is_configured": true, 00:15:03.637 "data_offset": 0, 00:15:03.637 "data_size": 65536 00:15:03.637 } 00:15:03.637 ] 00:15:03.637 }' 00:15:03.637 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.637 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.897 17:49:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.897 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.897 17:49:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.897 [2024-11-20 17:49:30.958183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.897 [2024-11-20 17:49:30.958288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.897 84.25 IOPS, 252.75 MiB/s 00:15:03.897 Latency(us) 00:15:03.897 [2024-11-20T17:49:31.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.897 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:03.897 raid_bdev1 : 8.05 83.98 251.95 0.00 0.00 17351.41 309.44 120883.87 00:15:03.897 [2024-11-20T17:49:31.073Z] 
=================================================================================================================== 00:15:03.897 [2024-11-20T17:49:31.073Z] Total : 83.98 251.95 0.00 0.00 17351.41 309.44 120883.87 00:15:03.897 [2024-11-20 17:49:31.039779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.897 [2024-11-20 17:49:31.039885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.897 [2024-11-20 17:49:31.040039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.897 [2024-11-20 17:49:31.040098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:03.897 { 00:15:03.897 "results": [ 00:15:03.897 { 00:15:03.897 "job": "raid_bdev1", 00:15:03.897 "core_mask": "0x1", 00:15:03.897 "workload": "randrw", 00:15:03.897 "percentage": 50, 00:15:03.897 "status": "finished", 00:15:03.897 "queue_depth": 2, 00:15:03.897 "io_size": 3145728, 00:15:03.897 "runtime": 8.049157, 00:15:03.897 "iops": 83.98395012049087, 00:15:03.897 "mibps": 251.95185036147262, 00:15:03.897 "io_failed": 0, 00:15:03.897 "io_timeout": 0, 00:15:03.897 "avg_latency_us": 17351.411570760447, 00:15:03.897 "min_latency_us": 309.435807860262, 00:15:03.897 "max_latency_us": 120883.87074235808 00:15:03.897 } 00:15:03.897 ], 00:15:03.897 "core_count": 1 00:15:03.897 } 00:15:03.897 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.897 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.897 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:03.897 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.897 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.897 17:49:31 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:04.157 /dev/nbd0 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( 
i = 1 )) 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.157 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.157 1+0 records in 00:15:04.157 1+0 records out 00:15:04.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400197 s, 10.2 MB/s 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:04.418 17:49:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:04.418 /dev/nbd1 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.418 17:49:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.418 1+0 records in 00:15:04.418 1+0 records out 00:15:04.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322775 s, 12.7 MB/s 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.418 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:04.679 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:04.679 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.679 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:04.679 
17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.679 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:04.679 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.679 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.938 17:49:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:05.198 /dev/nbd1 00:15:05.198 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:05.198 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:05.198 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:05.198 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:05.198 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.199 1+0 records in 00:15:05.199 1+0 records out 00:15:05.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205268 s, 20.0 
MB/s 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.199 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.459 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.720 17:49:32 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79200 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79200 ']' 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79200 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79200 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79200' 00:15:05.720 killing process with pid 79200 00:15:05.720 Received shutdown signal, test time was about 9.852504 seconds 00:15:05.720 00:15:05.720 Latency(us) 00:15:05.720 [2024-11-20T17:49:32.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.720 [2024-11-20T17:49:32.896Z] =================================================================================================================== 00:15:05.720 [2024-11-20T17:49:32.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79200 00:15:05.720 [2024-11-20 17:49:32.817837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:15:05.720 17:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79200 00:15:06.289 [2024-11-20 17:49:33.259217] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.682 ************************************ 00:15:07.682 END TEST raid_rebuild_test_io 00:15:07.682 ************************************ 00:15:07.682 17:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:07.682 00:15:07.682 real 0m13.414s 00:15:07.683 user 0m16.510s 00:15:07.683 sys 0m2.041s 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 17:49:34 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:07.683 17:49:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:07.683 17:49:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.683 17:49:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 ************************************ 00:15:07.683 START TEST raid_rebuild_test_sb_io 00:15:07.683 ************************************ 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:07.683 17:49:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79609 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79609 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79609 ']' 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.683 17:49:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:07.683 Zero copy mechanism will not be used. 00:15:07.683 [2024-11-20 17:49:34.719818] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:15:07.683 [2024-11-20 17:49:34.719939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79609 ] 00:15:07.942 [2024-11-20 17:49:34.896432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.942 [2024-11-20 17:49:35.038517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.204 [2024-11-20 17:49:35.277601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.204 [2024-11-20 17:49:35.277643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.464 BaseBdev1_malloc 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.464 [2024-11-20 17:49:35.606804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:08.464 [2024-11-20 17:49:35.606874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.464 [2024-11-20 17:49:35.606898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.464 [2024-11-20 17:49:35.606911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.464 [2024-11-20 17:49:35.609256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.464 [2024-11-20 17:49:35.609296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.464 BaseBdev1 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.464 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 BaseBdev2_malloc 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 [2024-11-20 17:49:35.669530] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:08.724 [2024-11-20 17:49:35.669602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.724 [2024-11-20 17:49:35.669628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.724 [2024-11-20 17:49:35.669641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.724 [2024-11-20 17:49:35.672061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.724 [2024-11-20 17:49:35.672097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.724 BaseBdev2 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 BaseBdev3_malloc 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 [2024-11-20 17:49:35.761374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:08.724 [2024-11-20 17:49:35.761437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:08.724 [2024-11-20 17:49:35.761461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:08.724 [2024-11-20 17:49:35.761474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.724 [2024-11-20 17:49:35.763788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.724 [2024-11-20 17:49:35.763826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:08.724 BaseBdev3 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 BaseBdev4_malloc 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 [2024-11-20 17:49:35.824428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:08.724 [2024-11-20 17:49:35.824498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.724 [2024-11-20 17:49:35.824523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:08.724 
[2024-11-20 17:49:35.824535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.724 [2024-11-20 17:49:35.826892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.724 [2024-11-20 17:49:35.826934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:08.724 BaseBdev4 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 spare_malloc 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 spare_delay 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.984 [2024-11-20 17:49:35.901820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.984 [2024-11-20 17:49:35.901879] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.984 [2024-11-20 17:49:35.901897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:08.984 [2024-11-20 17:49:35.901909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.984 [2024-11-20 17:49:35.904589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.984 [2024-11-20 17:49:35.904626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.984 spare 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.984 [2024-11-20 17:49:35.913866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.984 [2024-11-20 17:49:35.916392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.984 [2024-11-20 17:49:35.916462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.984 [2024-11-20 17:49:35.916516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:08.984 [2024-11-20 17:49:35.916718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:08.984 [2024-11-20 17:49:35.916741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:08.984 [2024-11-20 17:49:35.917030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:08.984 [2024-11-20 17:49:35.917228] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:08.984 [2024-11-20 17:49:35.917247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:08.984 [2024-11-20 17:49:35.917417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.984 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.985 17:49:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.985 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.985 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.985 "name": "raid_bdev1", 00:15:08.985 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:08.985 "strip_size_kb": 0, 00:15:08.985 "state": "online", 00:15:08.985 "raid_level": "raid1", 00:15:08.985 "superblock": true, 00:15:08.985 "num_base_bdevs": 4, 00:15:08.985 "num_base_bdevs_discovered": 4, 00:15:08.985 "num_base_bdevs_operational": 4, 00:15:08.985 "base_bdevs_list": [ 00:15:08.985 { 00:15:08.985 "name": "BaseBdev1", 00:15:08.985 "uuid": "e0bbc271-2d4c-54d6-8d32-209e2f440adf", 00:15:08.985 "is_configured": true, 00:15:08.985 "data_offset": 2048, 00:15:08.985 "data_size": 63488 00:15:08.985 }, 00:15:08.985 { 00:15:08.985 "name": "BaseBdev2", 00:15:08.985 "uuid": "36e8f198-ddd8-5301-97ea-9cb282ba4f5d", 00:15:08.985 "is_configured": true, 00:15:08.985 "data_offset": 2048, 00:15:08.985 "data_size": 63488 00:15:08.985 }, 00:15:08.985 { 00:15:08.985 "name": "BaseBdev3", 00:15:08.985 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:08.985 "is_configured": true, 00:15:08.985 "data_offset": 2048, 00:15:08.985 "data_size": 63488 00:15:08.985 }, 00:15:08.985 { 00:15:08.985 "name": "BaseBdev4", 00:15:08.985 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:08.985 "is_configured": true, 00:15:08.985 "data_offset": 2048, 00:15:08.985 "data_size": 63488 00:15:08.985 } 00:15:08.985 ] 00:15:08.985 }' 00:15:08.985 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.985 17:49:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.245 [2024-11-20 17:49:36.353541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.245 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.505 [2024-11-20 17:49:36.448973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.505 
17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.505 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.505 "name": "raid_bdev1", 00:15:09.505 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 
00:15:09.505 "strip_size_kb": 0, 00:15:09.505 "state": "online", 00:15:09.505 "raid_level": "raid1", 00:15:09.505 "superblock": true, 00:15:09.505 "num_base_bdevs": 4, 00:15:09.505 "num_base_bdevs_discovered": 3, 00:15:09.505 "num_base_bdevs_operational": 3, 00:15:09.505 "base_bdevs_list": [ 00:15:09.505 { 00:15:09.505 "name": null, 00:15:09.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.505 "is_configured": false, 00:15:09.505 "data_offset": 0, 00:15:09.505 "data_size": 63488 00:15:09.505 }, 00:15:09.505 { 00:15:09.505 "name": "BaseBdev2", 00:15:09.505 "uuid": "36e8f198-ddd8-5301-97ea-9cb282ba4f5d", 00:15:09.505 "is_configured": true, 00:15:09.505 "data_offset": 2048, 00:15:09.506 "data_size": 63488 00:15:09.506 }, 00:15:09.506 { 00:15:09.506 "name": "BaseBdev3", 00:15:09.506 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:09.506 "is_configured": true, 00:15:09.506 "data_offset": 2048, 00:15:09.506 "data_size": 63488 00:15:09.506 }, 00:15:09.506 { 00:15:09.506 "name": "BaseBdev4", 00:15:09.506 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:09.506 "is_configured": true, 00:15:09.506 "data_offset": 2048, 00:15:09.506 "data_size": 63488 00:15:09.506 } 00:15:09.506 ] 00:15:09.506 }' 00:15:09.506 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.506 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.506 [2024-11-20 17:49:36.546181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:09.506 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.506 Zero copy mechanism will not be used. 00:15:09.506 Running I/O for 60 seconds... 
00:15:09.765 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:09.765 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.765 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.765 [2024-11-20 17:49:36.867188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.765 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.765 17:49:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:09.765 [2024-11-20 17:49:36.936789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:09.765 [2024-11-20 17:49:36.939292] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:10.025 [2024-11-20 17:49:37.051163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:10.025 [2024-11-20 17:49:37.053580] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:10.284 [2024-11-20 17:49:37.274906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:10.284 [2024-11-20 17:49:37.275453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:10.543 [2024-11-20 17:49:37.503062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:10.803 127.00 IOPS, 381.00 MiB/s [2024-11-20T17:49:37.979Z] [2024-11-20 17:49:37.726649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:10.803 [2024-11-20 17:49:37.727235] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.803 "name": "raid_bdev1", 00:15:10.803 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:10.803 "strip_size_kb": 0, 00:15:10.803 "state": "online", 00:15:10.803 "raid_level": "raid1", 00:15:10.803 "superblock": true, 00:15:10.803 "num_base_bdevs": 4, 00:15:10.803 "num_base_bdevs_discovered": 4, 00:15:10.803 "num_base_bdevs_operational": 4, 00:15:10.803 "process": { 00:15:10.803 "type": "rebuild", 00:15:10.803 "target": "spare", 00:15:10.803 "progress": { 00:15:10.803 "blocks": 12288, 00:15:10.803 "percent": 19 00:15:10.803 } 00:15:10.803 }, 00:15:10.803 "base_bdevs_list": [ 00:15:10.803 { 00:15:10.803 "name": "spare", 
00:15:10.803 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:10.803 "is_configured": true, 00:15:10.803 "data_offset": 2048, 00:15:10.803 "data_size": 63488 00:15:10.803 }, 00:15:10.803 { 00:15:10.803 "name": "BaseBdev2", 00:15:10.803 "uuid": "36e8f198-ddd8-5301-97ea-9cb282ba4f5d", 00:15:10.803 "is_configured": true, 00:15:10.803 "data_offset": 2048, 00:15:10.803 "data_size": 63488 00:15:10.803 }, 00:15:10.803 { 00:15:10.803 "name": "BaseBdev3", 00:15:10.803 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:10.803 "is_configured": true, 00:15:10.803 "data_offset": 2048, 00:15:10.803 "data_size": 63488 00:15:10.803 }, 00:15:10.803 { 00:15:10.803 "name": "BaseBdev4", 00:15:10.803 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:10.803 "is_configured": true, 00:15:10.803 "data_offset": 2048, 00:15:10.803 "data_size": 63488 00:15:10.803 } 00:15:10.803 ] 00:15:10.803 }' 00:15:10.803 17:49:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.063 [2024-11-20 17:49:37.983644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:11.063 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.063 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.063 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.063 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:11.063 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.063 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.063 [2024-11-20 17:49:38.060402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.063 [2024-11-20 
17:49:38.125812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:11.063 [2024-11-20 17:49:38.127157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:11.324 [2024-11-20 17:49:38.244652] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.324 [2024-11-20 17:49:38.262359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.324 [2024-11-20 17:49:38.262471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.324 [2024-11-20 17:49:38.262500] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.324 [2024-11-20 17:49:38.298311] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.324 "name": "raid_bdev1", 00:15:11.324 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:11.324 "strip_size_kb": 0, 00:15:11.324 "state": "online", 00:15:11.324 "raid_level": "raid1", 00:15:11.324 "superblock": true, 00:15:11.324 "num_base_bdevs": 4, 00:15:11.324 "num_base_bdevs_discovered": 3, 00:15:11.324 "num_base_bdevs_operational": 3, 00:15:11.324 "base_bdevs_list": [ 00:15:11.324 { 00:15:11.324 "name": null, 00:15:11.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.324 "is_configured": false, 00:15:11.324 "data_offset": 0, 00:15:11.324 "data_size": 63488 00:15:11.324 }, 00:15:11.324 { 00:15:11.324 "name": "BaseBdev2", 00:15:11.324 "uuid": "36e8f198-ddd8-5301-97ea-9cb282ba4f5d", 00:15:11.324 "is_configured": true, 00:15:11.324 "data_offset": 2048, 00:15:11.324 "data_size": 63488 00:15:11.324 }, 00:15:11.324 { 00:15:11.324 "name": "BaseBdev3", 00:15:11.324 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:11.324 "is_configured": true, 00:15:11.324 "data_offset": 2048, 00:15:11.324 "data_size": 63488 00:15:11.324 }, 00:15:11.324 { 00:15:11.324 "name": "BaseBdev4", 
00:15:11.324 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:11.324 "is_configured": true, 00:15:11.324 "data_offset": 2048, 00:15:11.324 "data_size": 63488 00:15:11.324 } 00:15:11.324 ] 00:15:11.324 }' 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.324 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.584 113.50 IOPS, 340.50 MiB/s [2024-11-20T17:49:38.760Z] 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.584 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.584 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.584 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.584 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.584 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.844 "name": "raid_bdev1", 00:15:11.844 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:11.844 "strip_size_kb": 0, 00:15:11.844 "state": "online", 00:15:11.844 "raid_level": "raid1", 00:15:11.844 "superblock": true, 00:15:11.844 "num_base_bdevs": 4, 00:15:11.844 
"num_base_bdevs_discovered": 3, 00:15:11.844 "num_base_bdevs_operational": 3, 00:15:11.844 "base_bdevs_list": [ 00:15:11.844 { 00:15:11.844 "name": null, 00:15:11.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.844 "is_configured": false, 00:15:11.844 "data_offset": 0, 00:15:11.844 "data_size": 63488 00:15:11.844 }, 00:15:11.844 { 00:15:11.844 "name": "BaseBdev2", 00:15:11.844 "uuid": "36e8f198-ddd8-5301-97ea-9cb282ba4f5d", 00:15:11.844 "is_configured": true, 00:15:11.844 "data_offset": 2048, 00:15:11.844 "data_size": 63488 00:15:11.844 }, 00:15:11.844 { 00:15:11.844 "name": "BaseBdev3", 00:15:11.844 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:11.844 "is_configured": true, 00:15:11.844 "data_offset": 2048, 00:15:11.844 "data_size": 63488 00:15:11.844 }, 00:15:11.844 { 00:15:11.844 "name": "BaseBdev4", 00:15:11.844 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:11.844 "is_configured": true, 00:15:11.844 "data_offset": 2048, 00:15:11.844 "data_size": 63488 00:15:11.844 } 00:15:11.844 ] 00:15:11.844 }' 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.844 [2024-11-20 17:49:38.906590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.844 17:49:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.844 17:49:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:11.844 [2024-11-20 17:49:38.972562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:11.844 [2024-11-20 17:49:38.975183] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.104 [2024-11-20 17:49:39.087701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:12.104 [2024-11-20 17:49:39.088687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:12.363 [2024-11-20 17:49:39.304637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:12.363 [2024-11-20 17:49:39.306013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:12.622 129.33 IOPS, 388.00 MiB/s [2024-11-20T17:49:39.798Z] [2024-11-20 17:49:39.672930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:12.881 [2024-11-20 17:49:39.921560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.881 17:49:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.881 "name": "raid_bdev1", 00:15:12.881 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:12.881 "strip_size_kb": 0, 00:15:12.881 "state": "online", 00:15:12.881 "raid_level": "raid1", 00:15:12.881 "superblock": true, 00:15:12.881 "num_base_bdevs": 4, 00:15:12.881 "num_base_bdevs_discovered": 4, 00:15:12.881 "num_base_bdevs_operational": 4, 00:15:12.881 "process": { 00:15:12.881 "type": "rebuild", 00:15:12.881 "target": "spare", 00:15:12.881 "progress": { 00:15:12.881 "blocks": 10240, 00:15:12.881 "percent": 16 00:15:12.881 } 00:15:12.881 }, 00:15:12.881 "base_bdevs_list": [ 00:15:12.881 { 00:15:12.881 "name": "spare", 00:15:12.881 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:12.881 "is_configured": true, 00:15:12.881 "data_offset": 2048, 00:15:12.881 "data_size": 63488 00:15:12.881 }, 00:15:12.881 { 00:15:12.881 "name": "BaseBdev2", 00:15:12.881 "uuid": "36e8f198-ddd8-5301-97ea-9cb282ba4f5d", 00:15:12.881 "is_configured": true, 00:15:12.881 "data_offset": 2048, 00:15:12.881 "data_size": 63488 00:15:12.881 }, 00:15:12.881 { 00:15:12.881 "name": "BaseBdev3", 00:15:12.881 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:12.881 "is_configured": true, 00:15:12.881 "data_offset": 2048, 00:15:12.881 "data_size": 63488 00:15:12.881 }, 
00:15:12.881 { 00:15:12.881 "name": "BaseBdev4", 00:15:12.881 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:12.881 "is_configured": true, 00:15:12.881 "data_offset": 2048, 00:15:12.881 "data_size": 63488 00:15:12.881 } 00:15:12.881 ] 00:15:12.881 }' 00:15:12.881 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.881 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.881 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:13.141 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.141 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.141 [2024-11-20 17:49:40.111707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.141 [2024-11-20 17:49:40.209179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:13.401 
[2024-11-20 17:49:40.414499] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:13.401 [2024-11-20 17:49:40.414543] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:13.401 [2024-11-20 17:49:40.425017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.401 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:13.401 "name": "raid_bdev1", 00:15:13.401 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:13.401 "strip_size_kb": 0, 00:15:13.401 "state": "online", 00:15:13.401 "raid_level": "raid1", 00:15:13.401 "superblock": true, 00:15:13.401 "num_base_bdevs": 4, 00:15:13.401 "num_base_bdevs_discovered": 3, 00:15:13.401 "num_base_bdevs_operational": 3, 00:15:13.401 "process": { 00:15:13.401 "type": "rebuild", 00:15:13.401 "target": "spare", 00:15:13.401 "progress": { 00:15:13.401 "blocks": 14336, 00:15:13.401 "percent": 22 00:15:13.401 } 00:15:13.401 }, 00:15:13.401 "base_bdevs_list": [ 00:15:13.401 { 00:15:13.401 "name": "spare", 00:15:13.401 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:13.401 "is_configured": true, 00:15:13.401 "data_offset": 2048, 00:15:13.401 "data_size": 63488 00:15:13.401 }, 00:15:13.401 { 00:15:13.401 "name": null, 00:15:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.401 "is_configured": false, 00:15:13.401 "data_offset": 0, 00:15:13.402 "data_size": 63488 00:15:13.402 }, 00:15:13.402 { 00:15:13.402 "name": "BaseBdev3", 00:15:13.402 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:13.402 "is_configured": true, 00:15:13.402 "data_offset": 2048, 00:15:13.402 "data_size": 63488 00:15:13.402 }, 00:15:13.402 { 00:15:13.402 "name": "BaseBdev4", 00:15:13.402 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:13.402 "is_configured": true, 00:15:13.402 "data_offset": 2048, 00:15:13.402 "data_size": 63488 00:15:13.402 } 00:15:13.402 ] 00:15:13.402 }' 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=509 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.402 117.25 IOPS, 351.75 MiB/s [2024-11-20T17:49:40.578Z] 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.677 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.677 "name": "raid_bdev1", 00:15:13.677 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:13.677 "strip_size_kb": 0, 00:15:13.677 "state": "online", 00:15:13.677 "raid_level": "raid1", 00:15:13.677 "superblock": true, 00:15:13.677 "num_base_bdevs": 4, 00:15:13.677 "num_base_bdevs_discovered": 3, 00:15:13.677 "num_base_bdevs_operational": 3, 00:15:13.677 "process": { 00:15:13.677 "type": "rebuild", 00:15:13.677 "target": "spare", 00:15:13.677 "progress": { 00:15:13.677 
"blocks": 14336, 00:15:13.677 "percent": 22 00:15:13.677 } 00:15:13.677 }, 00:15:13.677 "base_bdevs_list": [ 00:15:13.677 { 00:15:13.677 "name": "spare", 00:15:13.677 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:13.677 "is_configured": true, 00:15:13.677 "data_offset": 2048, 00:15:13.677 "data_size": 63488 00:15:13.677 }, 00:15:13.677 { 00:15:13.677 "name": null, 00:15:13.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.677 "is_configured": false, 00:15:13.677 "data_offset": 0, 00:15:13.677 "data_size": 63488 00:15:13.677 }, 00:15:13.677 { 00:15:13.677 "name": "BaseBdev3", 00:15:13.677 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:13.677 "is_configured": true, 00:15:13.677 "data_offset": 2048, 00:15:13.677 "data_size": 63488 00:15:13.677 }, 00:15:13.677 { 00:15:13.677 "name": "BaseBdev4", 00:15:13.677 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:13.677 "is_configured": true, 00:15:13.677 "data_offset": 2048, 00:15:13.677 "data_size": 63488 00:15:13.677 } 00:15:13.677 ] 00:15:13.677 }' 00:15:13.677 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.677 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.677 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.677 [2024-11-20 17:49:40.643329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:13.677 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.677 17:49:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.950 [2024-11-20 17:49:40.847985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:13.950 [2024-11-20 17:49:40.849544] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:13.950 [2024-11-20 17:49:41.092486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:14.519 [2024-11-20 17:49:41.435502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:14.519 102.80 IOPS, 308.40 MiB/s [2024-11-20T17:49:41.695Z] [2024-11-20 17:49:41.672404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.519 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.779 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.779 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:14.779 "name": "raid_bdev1", 00:15:14.779 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:14.779 "strip_size_kb": 0, 00:15:14.779 "state": "online", 00:15:14.779 "raid_level": "raid1", 00:15:14.779 "superblock": true, 00:15:14.779 "num_base_bdevs": 4, 00:15:14.779 "num_base_bdevs_discovered": 3, 00:15:14.779 "num_base_bdevs_operational": 3, 00:15:14.779 "process": { 00:15:14.779 "type": "rebuild", 00:15:14.779 "target": "spare", 00:15:14.779 "progress": { 00:15:14.779 "blocks": 28672, 00:15:14.779 "percent": 45 00:15:14.779 } 00:15:14.779 }, 00:15:14.779 "base_bdevs_list": [ 00:15:14.779 { 00:15:14.779 "name": "spare", 00:15:14.779 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:14.779 "is_configured": true, 00:15:14.779 "data_offset": 2048, 00:15:14.779 "data_size": 63488 00:15:14.779 }, 00:15:14.779 { 00:15:14.779 "name": null, 00:15:14.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.779 "is_configured": false, 00:15:14.779 "data_offset": 0, 00:15:14.779 "data_size": 63488 00:15:14.779 }, 00:15:14.779 { 00:15:14.779 "name": "BaseBdev3", 00:15:14.779 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:14.779 "is_configured": true, 00:15:14.779 "data_offset": 2048, 00:15:14.779 "data_size": 63488 00:15:14.779 }, 00:15:14.779 { 00:15:14.779 "name": "BaseBdev4", 00:15:14.779 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:14.779 "is_configured": true, 00:15:14.779 "data_offset": 2048, 00:15:14.779 "data_size": 63488 00:15:14.779 } 00:15:14.779 ] 00:15:14.779 }' 00:15:14.779 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.779 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.779 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.779 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:15:14.779 17:49:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.779 [2024-11-20 17:49:41.903038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:15.717 94.50 IOPS, 283.50 MiB/s [2024-11-20T17:49:42.893Z] [2024-11-20 17:49:42.680866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:15.717 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.717 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.718 "name": "raid_bdev1", 00:15:15.718 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:15.718 "strip_size_kb": 0, 00:15:15.718 "state": 
"online", 00:15:15.718 "raid_level": "raid1", 00:15:15.718 "superblock": true, 00:15:15.718 "num_base_bdevs": 4, 00:15:15.718 "num_base_bdevs_discovered": 3, 00:15:15.718 "num_base_bdevs_operational": 3, 00:15:15.718 "process": { 00:15:15.718 "type": "rebuild", 00:15:15.718 "target": "spare", 00:15:15.718 "progress": { 00:15:15.718 "blocks": 47104, 00:15:15.718 "percent": 74 00:15:15.718 } 00:15:15.718 }, 00:15:15.718 "base_bdevs_list": [ 00:15:15.718 { 00:15:15.718 "name": "spare", 00:15:15.718 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:15.718 "is_configured": true, 00:15:15.718 "data_offset": 2048, 00:15:15.718 "data_size": 63488 00:15:15.718 }, 00:15:15.718 { 00:15:15.718 "name": null, 00:15:15.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.718 "is_configured": false, 00:15:15.718 "data_offset": 0, 00:15:15.718 "data_size": 63488 00:15:15.718 }, 00:15:15.718 { 00:15:15.718 "name": "BaseBdev3", 00:15:15.718 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:15.718 "is_configured": true, 00:15:15.718 "data_offset": 2048, 00:15:15.718 "data_size": 63488 00:15:15.718 }, 00:15:15.718 { 00:15:15.718 "name": "BaseBdev4", 00:15:15.718 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:15.718 "is_configured": true, 00:15:15.718 "data_offset": 2048, 00:15:15.718 "data_size": 63488 00:15:15.718 } 00:15:15.718 ] 00:15:15.718 }' 00:15:15.718 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.978 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.978 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.978 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.978 17:49:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.548 86.43 IOPS, 259.29 MiB/s [2024-11-20T17:49:43.724Z] 
[2024-11-20 17:49:43.678134] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:16.809 [2024-11-20 17:49:43.783322] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:16.809 [2024-11-20 17:49:43.788465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.809 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.069 17:49:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.069 "name": "raid_bdev1", 00:15:17.069 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:17.069 "strip_size_kb": 0, 00:15:17.069 "state": "online", 00:15:17.069 "raid_level": "raid1", 00:15:17.069 "superblock": true, 00:15:17.069 "num_base_bdevs": 4, 
00:15:17.069 "num_base_bdevs_discovered": 3, 00:15:17.069 "num_base_bdevs_operational": 3, 00:15:17.069 "base_bdevs_list": [ 00:15:17.069 { 00:15:17.069 "name": "spare", 00:15:17.069 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:17.069 "is_configured": true, 00:15:17.069 "data_offset": 2048, 00:15:17.069 "data_size": 63488 00:15:17.069 }, 00:15:17.069 { 00:15:17.069 "name": null, 00:15:17.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.069 "is_configured": false, 00:15:17.069 "data_offset": 0, 00:15:17.069 "data_size": 63488 00:15:17.069 }, 00:15:17.069 { 00:15:17.069 "name": "BaseBdev3", 00:15:17.069 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:17.069 "is_configured": true, 00:15:17.069 "data_offset": 2048, 00:15:17.069 "data_size": 63488 00:15:17.069 }, 00:15:17.069 { 00:15:17.069 "name": "BaseBdev4", 00:15:17.069 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:17.069 "is_configured": true, 00:15:17.069 "data_offset": 2048, 00:15:17.069 "data_size": 63488 00:15:17.069 } 00:15:17.069 ] 00:15:17.069 }' 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.069 17:49:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.069 "name": "raid_bdev1", 00:15:17.069 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:17.069 "strip_size_kb": 0, 00:15:17.069 "state": "online", 00:15:17.069 "raid_level": "raid1", 00:15:17.069 "superblock": true, 00:15:17.069 "num_base_bdevs": 4, 00:15:17.069 "num_base_bdevs_discovered": 3, 00:15:17.069 "num_base_bdevs_operational": 3, 00:15:17.069 "base_bdevs_list": [ 00:15:17.069 { 00:15:17.069 "name": "spare", 00:15:17.069 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:17.069 "is_configured": true, 00:15:17.069 "data_offset": 2048, 00:15:17.069 "data_size": 63488 00:15:17.069 }, 00:15:17.069 { 00:15:17.069 "name": null, 00:15:17.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.069 "is_configured": false, 00:15:17.069 "data_offset": 0, 00:15:17.069 "data_size": 63488 00:15:17.069 }, 00:15:17.069 { 00:15:17.069 "name": "BaseBdev3", 00:15:17.069 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:17.069 "is_configured": true, 00:15:17.069 "data_offset": 2048, 00:15:17.069 "data_size": 63488 00:15:17.069 }, 00:15:17.069 { 00:15:17.069 "name": "BaseBdev4", 00:15:17.069 "uuid": 
"038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:17.069 "is_configured": true, 00:15:17.069 "data_offset": 2048, 00:15:17.069 "data_size": 63488 00:15:17.069 } 00:15:17.069 ] 00:15:17.069 }' 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.069 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:17.070 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.070 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.330 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.330 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.330 "name": "raid_bdev1", 00:15:17.330 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:17.330 "strip_size_kb": 0, 00:15:17.330 "state": "online", 00:15:17.330 "raid_level": "raid1", 00:15:17.330 "superblock": true, 00:15:17.330 "num_base_bdevs": 4, 00:15:17.330 "num_base_bdevs_discovered": 3, 00:15:17.330 "num_base_bdevs_operational": 3, 00:15:17.330 "base_bdevs_list": [ 00:15:17.330 { 00:15:17.330 "name": "spare", 00:15:17.330 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:17.330 "is_configured": true, 00:15:17.330 "data_offset": 2048, 00:15:17.330 "data_size": 63488 00:15:17.330 }, 00:15:17.330 { 00:15:17.330 "name": null, 00:15:17.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.330 "is_configured": false, 00:15:17.330 "data_offset": 0, 00:15:17.330 "data_size": 63488 00:15:17.330 }, 00:15:17.330 { 00:15:17.330 "name": "BaseBdev3", 00:15:17.330 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:17.330 "is_configured": true, 00:15:17.330 "data_offset": 2048, 00:15:17.330 "data_size": 63488 00:15:17.330 }, 00:15:17.330 { 00:15:17.330 "name": "BaseBdev4", 00:15:17.330 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:17.330 "is_configured": true, 00:15:17.330 "data_offset": 2048, 00:15:17.330 "data_size": 63488 00:15:17.330 } 00:15:17.330 ] 00:15:17.330 }' 00:15:17.330 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.330 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.589 79.50 IOPS, 238.50 
MiB/s [2024-11-20T17:49:44.765Z] 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.589 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.589 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.589 [2024-11-20 17:49:44.706918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.589 [2024-11-20 17:49:44.707020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.849 00:15:17.849 Latency(us) 00:15:17.849 [2024-11-20T17:49:45.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.849 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:17.849 raid_bdev1 : 8.24 78.50 235.50 0.00 0.00 17645.84 357.73 122715.44 00:15:17.849 [2024-11-20T17:49:45.025Z] =================================================================================================================== 00:15:17.849 [2024-11-20T17:49:45.025Z] Total : 78.50 235.50 0.00 0.00 17645.84 357.73 122715.44 00:15:17.849 [2024-11-20 17:49:44.795666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.849 [2024-11-20 17:49:44.795757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.849 [2024-11-20 17:49:44.795880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.849 [2024-11-20 17:49:44.795968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:17.849 { 00:15:17.849 "results": [ 00:15:17.849 { 00:15:17.849 "job": "raid_bdev1", 00:15:17.849 "core_mask": "0x1", 00:15:17.849 "workload": "randrw", 00:15:17.849 "percentage": 50, 00:15:17.849 "status": "finished", 00:15:17.849 "queue_depth": 2, 00:15:17.849 
"io_size": 3145728, 00:15:17.849 "runtime": 8.241993, 00:15:17.849 "iops": 78.50043066039973, 00:15:17.849 "mibps": 235.5012919811992, 00:15:17.849 "io_failed": 0, 00:15:17.849 "io_timeout": 0, 00:15:17.849 "avg_latency_us": 17645.841577181887, 00:15:17.849 "min_latency_us": 357.7292576419214, 00:15:17.849 "max_latency_us": 122715.44454148471 00:15:17.849 } 00:15:17.849 ], 00:15:17.849 "core_count": 1 00:15:17.849 } 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:17.850 17:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:18.110 /dev/nbd0 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.110 1+0 records in 00:15:18.110 1+0 records out 00:15:18.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554849 s, 7.4 MB/s 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.110 
17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.110 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:18.370 /dev/nbd1 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.370 1+0 records in 00:15:18.370 1+0 records out 00:15:18.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424572 s, 9.6 MB/s 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.370 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.630 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:18.889 /dev/nbd1 00:15:18.889 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:15:18.889 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:18.889 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:18.889 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:18.889 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.890 1+0 records in 00:15:18.890 1+0 records out 00:15:18.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238068 s, 17.2 MB/s 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.890 17:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.150 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:19.409 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:19.409 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:19.409 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:19.409 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.409 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.409 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.409 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.410 [2024-11-20 17:49:46.525412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:19.410 [2024-11-20 17:49:46.525474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.410 [2024-11-20 17:49:46.525506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:19.410 [2024-11-20 17:49:46.525519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.410 [2024-11-20 17:49:46.528124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.410 [2024-11-20 17:49:46.528158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:19.410 [2024-11-20 17:49:46.528260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:19.410 [2024-11-20 17:49:46.528321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.410 [2024-11-20 17:49:46.528467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.410 [2024-11-20 17:49:46.528567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.410 spare 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:19.410 17:49:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.410 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.669 [2024-11-20 17:49:46.628469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:19.669 [2024-11-20 17:49:46.628511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:19.669 [2024-11-20 17:49:46.628883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:19.669 [2024-11-20 17:49:46.629148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:19.669 [2024-11-20 17:49:46.629167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:19.669 [2024-11-20 17:49:46.629410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.669 "name": "raid_bdev1", 00:15:19.669 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:19.669 "strip_size_kb": 0, 00:15:19.669 "state": "online", 00:15:19.669 "raid_level": "raid1", 00:15:19.669 "superblock": true, 00:15:19.669 "num_base_bdevs": 4, 00:15:19.669 "num_base_bdevs_discovered": 3, 00:15:19.669 "num_base_bdevs_operational": 3, 00:15:19.669 "base_bdevs_list": [ 00:15:19.669 { 00:15:19.669 "name": "spare", 00:15:19.669 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:19.669 "is_configured": true, 00:15:19.669 "data_offset": 2048, 00:15:19.669 "data_size": 63488 00:15:19.669 }, 00:15:19.669 { 00:15:19.669 "name": null, 00:15:19.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.669 "is_configured": false, 00:15:19.669 "data_offset": 2048, 00:15:19.669 "data_size": 63488 00:15:19.669 }, 00:15:19.669 { 00:15:19.669 "name": "BaseBdev3", 00:15:19.669 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:19.669 "is_configured": true, 00:15:19.669 "data_offset": 2048, 00:15:19.669 "data_size": 63488 00:15:19.669 }, 00:15:19.669 { 00:15:19.669 "name": "BaseBdev4", 
00:15:19.669 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:19.669 "is_configured": true, 00:15:19.669 "data_offset": 2048, 00:15:19.669 "data_size": 63488 00:15:19.669 } 00:15:19.669 ] 00:15:19.669 }' 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.669 17:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.929 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.189 "name": "raid_bdev1", 00:15:20.189 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:20.189 "strip_size_kb": 0, 00:15:20.189 "state": "online", 00:15:20.189 "raid_level": "raid1", 00:15:20.189 "superblock": true, 00:15:20.189 "num_base_bdevs": 4, 00:15:20.189 "num_base_bdevs_discovered": 3, 00:15:20.189 
"num_base_bdevs_operational": 3, 00:15:20.189 "base_bdevs_list": [ 00:15:20.189 { 00:15:20.189 "name": "spare", 00:15:20.189 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:20.189 "is_configured": true, 00:15:20.189 "data_offset": 2048, 00:15:20.189 "data_size": 63488 00:15:20.189 }, 00:15:20.189 { 00:15:20.189 "name": null, 00:15:20.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.189 "is_configured": false, 00:15:20.189 "data_offset": 2048, 00:15:20.189 "data_size": 63488 00:15:20.189 }, 00:15:20.189 { 00:15:20.189 "name": "BaseBdev3", 00:15:20.189 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:20.189 "is_configured": true, 00:15:20.189 "data_offset": 2048, 00:15:20.189 "data_size": 63488 00:15:20.189 }, 00:15:20.189 { 00:15:20.189 "name": "BaseBdev4", 00:15:20.189 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:20.189 "is_configured": true, 00:15:20.189 "data_offset": 2048, 00:15:20.189 "data_size": 63488 00:15:20.189 } 00:15:20.189 ] 00:15:20.189 }' 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.189 [2024-11-20 17:49:47.253087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.189 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.189 "name": "raid_bdev1", 00:15:20.189 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:20.189 "strip_size_kb": 0, 00:15:20.189 "state": "online", 00:15:20.189 "raid_level": "raid1", 00:15:20.189 "superblock": true, 00:15:20.189 "num_base_bdevs": 4, 00:15:20.189 "num_base_bdevs_discovered": 2, 00:15:20.189 "num_base_bdevs_operational": 2, 00:15:20.189 "base_bdevs_list": [ 00:15:20.189 { 00:15:20.189 "name": null, 00:15:20.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.189 "is_configured": false, 00:15:20.189 "data_offset": 0, 00:15:20.190 "data_size": 63488 00:15:20.190 }, 00:15:20.190 { 00:15:20.190 "name": null, 00:15:20.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.190 "is_configured": false, 00:15:20.190 "data_offset": 2048, 00:15:20.190 "data_size": 63488 00:15:20.190 }, 00:15:20.190 { 00:15:20.190 "name": "BaseBdev3", 00:15:20.190 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:20.190 "is_configured": true, 00:15:20.190 "data_offset": 2048, 00:15:20.190 "data_size": 63488 00:15:20.190 }, 00:15:20.190 { 00:15:20.190 "name": "BaseBdev4", 00:15:20.190 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:20.190 "is_configured": true, 00:15:20.190 "data_offset": 2048, 00:15:20.190 "data_size": 63488 00:15:20.190 } 00:15:20.190 ] 00:15:20.190 }' 00:15:20.190 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.190 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:20.758 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.758 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.758 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.758 [2024-11-20 17:49:47.680656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.758 [2024-11-20 17:49:47.680978] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:20.758 [2024-11-20 17:49:47.681004] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:20.758 [2024-11-20 17:49:47.681064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.758 [2024-11-20 17:49:47.696367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:20.758 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.758 17:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:20.758 [2024-11-20 17:49:47.698586] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.698 17:49:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.698 "name": "raid_bdev1", 00:15:21.698 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:21.698 "strip_size_kb": 0, 00:15:21.698 "state": "online", 00:15:21.698 "raid_level": "raid1", 00:15:21.698 "superblock": true, 00:15:21.698 "num_base_bdevs": 4, 00:15:21.698 "num_base_bdevs_discovered": 3, 00:15:21.698 "num_base_bdevs_operational": 3, 00:15:21.698 "process": { 00:15:21.698 "type": "rebuild", 00:15:21.698 "target": "spare", 00:15:21.698 "progress": { 00:15:21.698 "blocks": 20480, 00:15:21.698 "percent": 32 00:15:21.698 } 00:15:21.698 }, 00:15:21.698 "base_bdevs_list": [ 00:15:21.698 { 00:15:21.698 "name": "spare", 00:15:21.698 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:21.698 "is_configured": true, 00:15:21.698 "data_offset": 2048, 00:15:21.698 "data_size": 63488 00:15:21.698 }, 00:15:21.698 { 00:15:21.698 "name": null, 00:15:21.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.698 "is_configured": false, 00:15:21.698 "data_offset": 2048, 00:15:21.698 "data_size": 63488 00:15:21.698 }, 00:15:21.698 { 00:15:21.698 "name": "BaseBdev3", 00:15:21.698 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:21.698 "is_configured": true, 00:15:21.698 "data_offset": 2048, 00:15:21.698 "data_size": 63488 00:15:21.698 }, 00:15:21.698 { 00:15:21.698 "name": "BaseBdev4", 00:15:21.698 "uuid": 
"038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:21.698 "is_configured": true, 00:15:21.698 "data_offset": 2048, 00:15:21.698 "data_size": 63488 00:15:21.698 } 00:15:21.698 ] 00:15:21.698 }' 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.698 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.698 [2024-11-20 17:49:48.862495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.957 [2024-11-20 17:49:48.908093] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:21.957 [2024-11-20 17:49:48.908168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.957 [2024-11-20 17:49:48.908185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.957 [2024-11-20 17:49:48.908195] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.957 17:49:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.957 "name": "raid_bdev1", 00:15:21.957 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:21.957 "strip_size_kb": 0, 00:15:21.957 "state": "online", 00:15:21.957 "raid_level": "raid1", 00:15:21.957 "superblock": true, 00:15:21.957 "num_base_bdevs": 4, 00:15:21.957 "num_base_bdevs_discovered": 2, 00:15:21.957 "num_base_bdevs_operational": 2, 00:15:21.957 "base_bdevs_list": [ 00:15:21.957 { 00:15:21.957 "name": null, 00:15:21.957 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:21.957 "is_configured": false, 00:15:21.957 "data_offset": 0, 00:15:21.957 "data_size": 63488 00:15:21.957 }, 00:15:21.957 { 00:15:21.957 "name": null, 00:15:21.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.957 "is_configured": false, 00:15:21.957 "data_offset": 2048, 00:15:21.957 "data_size": 63488 00:15:21.957 }, 00:15:21.957 { 00:15:21.957 "name": "BaseBdev3", 00:15:21.957 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:21.957 "is_configured": true, 00:15:21.957 "data_offset": 2048, 00:15:21.957 "data_size": 63488 00:15:21.957 }, 00:15:21.957 { 00:15:21.957 "name": "BaseBdev4", 00:15:21.957 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:21.957 "is_configured": true, 00:15:21.957 "data_offset": 2048, 00:15:21.957 "data_size": 63488 00:15:21.957 } 00:15:21.957 ] 00:15:21.957 }' 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.957 17:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.216 17:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:22.216 17:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.216 17:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.216 [2024-11-20 17:49:49.355900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:22.216 [2024-11-20 17:49:49.355991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.216 [2024-11-20 17:49:49.356038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:22.216 [2024-11-20 17:49:49.356053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.216 [2024-11-20 17:49:49.356625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:15:22.216 [2024-11-20 17:49:49.356653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:22.216 [2024-11-20 17:49:49.356766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:22.216 [2024-11-20 17:49:49.356783] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:22.216 [2024-11-20 17:49:49.356795] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:22.216 [2024-11-20 17:49:49.356824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.216 [2024-11-20 17:49:49.371638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:22.216 spare 00:15:22.216 17:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.216 17:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:22.216 [2024-11-20 17:49:49.373730] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.595 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.595 "name": "raid_bdev1", 00:15:23.595 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:23.595 "strip_size_kb": 0, 00:15:23.595 "state": "online", 00:15:23.595 "raid_level": "raid1", 00:15:23.595 "superblock": true, 00:15:23.595 "num_base_bdevs": 4, 00:15:23.595 "num_base_bdevs_discovered": 3, 00:15:23.595 "num_base_bdevs_operational": 3, 00:15:23.595 "process": { 00:15:23.595 "type": "rebuild", 00:15:23.595 "target": "spare", 00:15:23.595 "progress": { 00:15:23.595 "blocks": 20480, 00:15:23.595 "percent": 32 00:15:23.595 } 00:15:23.595 }, 00:15:23.595 "base_bdevs_list": [ 00:15:23.595 { 00:15:23.595 "name": "spare", 00:15:23.595 "uuid": "20beb05b-9342-55fb-9ba8-1bdf9b7c2673", 00:15:23.595 "is_configured": true, 00:15:23.595 "data_offset": 2048, 00:15:23.595 "data_size": 63488 00:15:23.595 }, 00:15:23.595 { 00:15:23.595 "name": null, 00:15:23.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.595 "is_configured": false, 00:15:23.595 "data_offset": 2048, 00:15:23.595 "data_size": 63488 00:15:23.595 }, 00:15:23.595 { 00:15:23.595 "name": "BaseBdev3", 00:15:23.595 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:23.595 "is_configured": true, 00:15:23.595 "data_offset": 2048, 00:15:23.595 "data_size": 63488 00:15:23.595 }, 00:15:23.595 { 00:15:23.595 "name": "BaseBdev4", 00:15:23.595 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:23.596 "is_configured": true, 00:15:23.596 "data_offset": 2048, 00:15:23.596 "data_size": 63488 00:15:23.596 } 00:15:23.596 ] 00:15:23.596 }' 00:15:23.596 17:49:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.596 [2024-11-20 17:49:50.509477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.596 [2024-11-20 17:49:50.583014] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.596 [2024-11-20 17:49:50.583168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.596 [2024-11-20 17:49:50.583194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.596 [2024-11-20 17:49:50.583203] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.596 "name": "raid_bdev1", 00:15:23.596 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:23.596 "strip_size_kb": 0, 00:15:23.596 "state": "online", 00:15:23.596 "raid_level": "raid1", 00:15:23.596 "superblock": true, 00:15:23.596 "num_base_bdevs": 4, 00:15:23.596 "num_base_bdevs_discovered": 2, 00:15:23.596 "num_base_bdevs_operational": 2, 00:15:23.596 "base_bdevs_list": [ 00:15:23.596 { 00:15:23.596 "name": null, 00:15:23.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.596 "is_configured": false, 00:15:23.596 "data_offset": 0, 00:15:23.596 "data_size": 63488 00:15:23.596 }, 00:15:23.596 { 00:15:23.596 "name": null, 00:15:23.596 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:23.596 "is_configured": false, 00:15:23.596 "data_offset": 2048, 00:15:23.596 "data_size": 63488 00:15:23.596 }, 00:15:23.596 { 00:15:23.596 "name": "BaseBdev3", 00:15:23.596 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:23.596 "is_configured": true, 00:15:23.596 "data_offset": 2048, 00:15:23.596 "data_size": 63488 00:15:23.596 }, 00:15:23.596 { 00:15:23.596 "name": "BaseBdev4", 00:15:23.596 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:23.596 "is_configured": true, 00:15:23.596 "data_offset": 2048, 00:15:23.596 "data_size": 63488 00:15:23.596 } 00:15:23.596 ] 00:15:23.596 }' 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.596 17:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.855 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.855 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.855 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.855 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.855 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.115 
17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.115 "name": "raid_bdev1", 00:15:24.115 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:24.115 "strip_size_kb": 0, 00:15:24.115 "state": "online", 00:15:24.115 "raid_level": "raid1", 00:15:24.115 "superblock": true, 00:15:24.115 "num_base_bdevs": 4, 00:15:24.115 "num_base_bdevs_discovered": 2, 00:15:24.115 "num_base_bdevs_operational": 2, 00:15:24.115 "base_bdevs_list": [ 00:15:24.115 { 00:15:24.115 "name": null, 00:15:24.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.115 "is_configured": false, 00:15:24.115 "data_offset": 0, 00:15:24.115 "data_size": 63488 00:15:24.115 }, 00:15:24.115 { 00:15:24.115 "name": null, 00:15:24.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.115 "is_configured": false, 00:15:24.115 "data_offset": 2048, 00:15:24.115 "data_size": 63488 00:15:24.115 }, 00:15:24.115 { 00:15:24.115 "name": "BaseBdev3", 00:15:24.115 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:24.115 "is_configured": true, 00:15:24.115 "data_offset": 2048, 00:15:24.115 "data_size": 63488 00:15:24.115 }, 00:15:24.115 { 00:15:24.115 "name": "BaseBdev4", 00:15:24.115 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:24.115 "is_configured": true, 00:15:24.115 "data_offset": 2048, 00:15:24.115 "data_size": 63488 00:15:24.115 } 00:15:24.115 ] 00:15:24.115 }' 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 
00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.115 [2024-11-20 17:49:51.171209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:24.115 [2024-11-20 17:49:51.171282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.115 [2024-11-20 17:49:51.171306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:24.115 [2024-11-20 17:49:51.171315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.115 [2024-11-20 17:49:51.171853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.115 [2024-11-20 17:49:51.171877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:24.115 [2024-11-20 17:49:51.171973] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:24.115 [2024-11-20 17:49:51.171990] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:24.115 [2024-11-20 17:49:51.172001] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:24.115 [2024-11-20 17:49:51.172025] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: 
Invalid argument 00:15:24.115 BaseBdev1 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.115 17:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.053 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.313 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.313 "name": "raid_bdev1", 00:15:25.313 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:25.313 "strip_size_kb": 0, 00:15:25.313 "state": "online", 00:15:25.313 "raid_level": "raid1", 00:15:25.313 "superblock": true, 00:15:25.313 "num_base_bdevs": 4, 00:15:25.313 "num_base_bdevs_discovered": 2, 00:15:25.313 "num_base_bdevs_operational": 2, 00:15:25.313 "base_bdevs_list": [ 00:15:25.313 { 00:15:25.313 "name": null, 00:15:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.313 "is_configured": false, 00:15:25.313 "data_offset": 0, 00:15:25.313 "data_size": 63488 00:15:25.313 }, 00:15:25.313 { 00:15:25.313 "name": null, 00:15:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.313 "is_configured": false, 00:15:25.313 "data_offset": 2048, 00:15:25.313 "data_size": 63488 00:15:25.313 }, 00:15:25.313 { 00:15:25.313 "name": "BaseBdev3", 00:15:25.313 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:25.313 "is_configured": true, 00:15:25.313 "data_offset": 2048, 00:15:25.313 "data_size": 63488 00:15:25.313 }, 00:15:25.313 { 00:15:25.313 "name": "BaseBdev4", 00:15:25.313 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:25.313 "is_configured": true, 00:15:25.313 "data_offset": 2048, 00:15:25.313 "data_size": 63488 00:15:25.313 } 00:15:25.313 ] 00:15:25.313 }' 00:15:25.313 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.313 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.573 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.573 "name": "raid_bdev1", 00:15:25.573 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:25.573 "strip_size_kb": 0, 00:15:25.573 "state": "online", 00:15:25.573 "raid_level": "raid1", 00:15:25.573 "superblock": true, 00:15:25.573 "num_base_bdevs": 4, 00:15:25.573 "num_base_bdevs_discovered": 2, 00:15:25.573 "num_base_bdevs_operational": 2, 00:15:25.573 "base_bdevs_list": [ 00:15:25.573 { 00:15:25.574 "name": null, 00:15:25.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.574 "is_configured": false, 00:15:25.574 "data_offset": 0, 00:15:25.574 "data_size": 63488 00:15:25.574 }, 00:15:25.574 { 00:15:25.574 "name": null, 00:15:25.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.574 "is_configured": false, 00:15:25.574 "data_offset": 2048, 00:15:25.574 "data_size": 63488 00:15:25.574 }, 00:15:25.574 { 00:15:25.574 "name": "BaseBdev3", 00:15:25.574 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:25.574 "is_configured": true, 00:15:25.574 "data_offset": 2048, 00:15:25.574 "data_size": 63488 00:15:25.574 }, 00:15:25.574 { 00:15:25.574 "name": "BaseBdev4", 00:15:25.574 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 
00:15:25.574 "is_configured": true, 00:15:25.574 "data_offset": 2048, 00:15:25.574 "data_size": 63488 00:15:25.574 } 00:15:25.574 ] 00:15:25.574 }' 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.574 [2024-11-20 17:49:52.693077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.574 [2024-11-20 
17:49:52.693284] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:25.574 [2024-11-20 17:49:52.693310] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:25.574 request: 00:15:25.574 { 00:15:25.574 "base_bdev": "BaseBdev1", 00:15:25.574 "raid_bdev": "raid_bdev1", 00:15:25.574 "method": "bdev_raid_add_base_bdev", 00:15:25.574 "req_id": 1 00:15:25.574 } 00:15:25.574 Got JSON-RPC error response 00:15:25.574 response: 00:15:25.574 { 00:15:25.574 "code": -22, 00:15:25.574 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:25.574 } 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:25.574 17:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.966 "name": "raid_bdev1", 00:15:26.966 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:26.966 "strip_size_kb": 0, 00:15:26.966 "state": "online", 00:15:26.966 "raid_level": "raid1", 00:15:26.966 "superblock": true, 00:15:26.966 "num_base_bdevs": 4, 00:15:26.966 "num_base_bdevs_discovered": 2, 00:15:26.966 "num_base_bdevs_operational": 2, 00:15:26.966 "base_bdevs_list": [ 00:15:26.966 { 00:15:26.966 "name": null, 00:15:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.966 "is_configured": false, 00:15:26.966 "data_offset": 0, 00:15:26.966 "data_size": 63488 00:15:26.966 }, 00:15:26.966 { 00:15:26.966 "name": null, 00:15:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.966 "is_configured": false, 00:15:26.966 "data_offset": 2048, 00:15:26.966 "data_size": 63488 00:15:26.966 }, 00:15:26.966 { 00:15:26.966 "name": 
"BaseBdev3", 00:15:26.966 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:26.966 "is_configured": true, 00:15:26.966 "data_offset": 2048, 00:15:26.966 "data_size": 63488 00:15:26.966 }, 00:15:26.966 { 00:15:26.966 "name": "BaseBdev4", 00:15:26.966 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:26.966 "is_configured": true, 00:15:26.966 "data_offset": 2048, 00:15:26.966 "data_size": 63488 00:15:26.966 } 00:15:26.966 ] 00:15:26.966 }' 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.966 17:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.226 "name": "raid_bdev1", 00:15:27.226 "uuid": "9c6e1e63-67fb-480d-9904-8c6e1d406633", 00:15:27.226 
"strip_size_kb": 0, 00:15:27.226 "state": "online", 00:15:27.226 "raid_level": "raid1", 00:15:27.226 "superblock": true, 00:15:27.226 "num_base_bdevs": 4, 00:15:27.226 "num_base_bdevs_discovered": 2, 00:15:27.226 "num_base_bdevs_operational": 2, 00:15:27.226 "base_bdevs_list": [ 00:15:27.226 { 00:15:27.226 "name": null, 00:15:27.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.226 "is_configured": false, 00:15:27.226 "data_offset": 0, 00:15:27.226 "data_size": 63488 00:15:27.226 }, 00:15:27.226 { 00:15:27.226 "name": null, 00:15:27.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.226 "is_configured": false, 00:15:27.226 "data_offset": 2048, 00:15:27.226 "data_size": 63488 00:15:27.226 }, 00:15:27.226 { 00:15:27.226 "name": "BaseBdev3", 00:15:27.226 "uuid": "16b83cb0-0396-5907-abe6-174eddc9ab1b", 00:15:27.226 "is_configured": true, 00:15:27.226 "data_offset": 2048, 00:15:27.226 "data_size": 63488 00:15:27.226 }, 00:15:27.226 { 00:15:27.226 "name": "BaseBdev4", 00:15:27.226 "uuid": "038e06fa-1e07-54b9-99fc-695e096798c3", 00:15:27.226 "is_configured": true, 00:15:27.226 "data_offset": 2048, 00:15:27.226 "data_size": 63488 00:15:27.226 } 00:15:27.226 ] 00:15:27.226 }' 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79609 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79609 ']' 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79609 00:15:27.226 
17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79609 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79609' 00:15:27.226 killing process with pid 79609 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79609 00:15:27.226 Received shutdown signal, test time was about 17.816075 seconds 00:15:27.226 00:15:27.226 Latency(us) 00:15:27.226 [2024-11-20T17:49:54.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.226 [2024-11-20T17:49:54.402Z] =================================================================================================================== 00:15:27.226 [2024-11-20T17:49:54.402Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.226 [2024-11-20 17:49:54.330369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.226 17:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79609 00:15:27.226 [2024-11-20 17:49:54.330529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.226 [2024-11-20 17:49:54.330612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.226 [2024-11-20 17:49:54.330626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:27.796 [2024-11-20 17:49:54.779750] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.176 ************************************ 00:15:29.176 END TEST raid_rebuild_test_sb_io 00:15:29.176 ************************************ 00:15:29.176 17:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:29.176 00:15:29.176 real 0m21.423s 00:15:29.176 user 0m27.578s 00:15:29.176 sys 0m2.678s 00:15:29.176 17:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.176 17:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.176 17:49:56 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:29.176 17:49:56 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:29.176 17:49:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:29.176 17:49:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.176 17:49:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:29.176 ************************************ 00:15:29.176 START TEST raid5f_state_function_test 00:15:29.176 ************************************ 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80344 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80344' 00:15:29.176 Process raid pid: 80344 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80344 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80344 ']' 00:15:29.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.176 17:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.176 [2024-11-20 17:49:56.208895] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:15:29.176 [2024-11-20 17:49:56.209059] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.435 [2024-11-20 17:49:56.383378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.435 [2024-11-20 17:49:56.523866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.694 [2024-11-20 17:49:56.766434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.694 [2024-11-20 17:49:56.766605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.954 [2024-11-20 17:49:57.041709] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.954 [2024-11-20 17:49:57.041775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.954 [2024-11-20 17:49:57.041786] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.954 [2024-11-20 17:49:57.041796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.954 [2024-11-20 17:49:57.041802] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:29.954 [2024-11-20 17:49:57.041811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.954 "name": "Existed_Raid", 00:15:29.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.954 "strip_size_kb": 64, 00:15:29.954 "state": "configuring", 00:15:29.954 "raid_level": "raid5f", 00:15:29.954 "superblock": false, 00:15:29.954 "num_base_bdevs": 3, 00:15:29.954 "num_base_bdevs_discovered": 0, 00:15:29.954 "num_base_bdevs_operational": 3, 00:15:29.954 "base_bdevs_list": [ 00:15:29.954 { 00:15:29.954 "name": "BaseBdev1", 00:15:29.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.954 "is_configured": false, 00:15:29.954 "data_offset": 0, 00:15:29.954 "data_size": 0 00:15:29.954 }, 00:15:29.954 { 00:15:29.954 "name": "BaseBdev2", 00:15:29.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.954 "is_configured": false, 00:15:29.954 "data_offset": 0, 00:15:29.954 "data_size": 0 00:15:29.954 }, 00:15:29.954 { 00:15:29.954 "name": "BaseBdev3", 00:15:29.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.954 "is_configured": false, 00:15:29.954 "data_offset": 0, 00:15:29.954 "data_size": 0 00:15:29.954 } 00:15:29.954 ] 00:15:29.954 }' 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.954 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.525 [2024-11-20 17:49:57.472930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.525 [2024-11-20 17:49:57.473067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.525 [2024-11-20 17:49:57.484891] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.525 [2024-11-20 17:49:57.484984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.525 [2024-11-20 17:49:57.485019] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.525 [2024-11-20 17:49:57.485042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.525 [2024-11-20 17:49:57.485059] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.525 [2024-11-20 17:49:57.485079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.525 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.525 [2024-11-20 17:49:57.538203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.526 BaseBdev1 00:15:30.526 17:49:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.526 [ 00:15:30.526 { 00:15:30.526 "name": "BaseBdev1", 00:15:30.526 "aliases": [ 00:15:30.526 "fbd0657a-d240-49d3-8d0f-af8c3588b0ee" 00:15:30.526 ], 00:15:30.526 "product_name": "Malloc disk", 00:15:30.526 "block_size": 512, 00:15:30.526 "num_blocks": 65536, 00:15:30.526 "uuid": "fbd0657a-d240-49d3-8d0f-af8c3588b0ee", 00:15:30.526 "assigned_rate_limits": { 00:15:30.526 "rw_ios_per_sec": 0, 00:15:30.526 
"rw_mbytes_per_sec": 0, 00:15:30.526 "r_mbytes_per_sec": 0, 00:15:30.526 "w_mbytes_per_sec": 0 00:15:30.526 }, 00:15:30.526 "claimed": true, 00:15:30.526 "claim_type": "exclusive_write", 00:15:30.526 "zoned": false, 00:15:30.526 "supported_io_types": { 00:15:30.526 "read": true, 00:15:30.526 "write": true, 00:15:30.526 "unmap": true, 00:15:30.526 "flush": true, 00:15:30.526 "reset": true, 00:15:30.526 "nvme_admin": false, 00:15:30.526 "nvme_io": false, 00:15:30.526 "nvme_io_md": false, 00:15:30.526 "write_zeroes": true, 00:15:30.526 "zcopy": true, 00:15:30.526 "get_zone_info": false, 00:15:30.526 "zone_management": false, 00:15:30.526 "zone_append": false, 00:15:30.526 "compare": false, 00:15:30.526 "compare_and_write": false, 00:15:30.526 "abort": true, 00:15:30.526 "seek_hole": false, 00:15:30.526 "seek_data": false, 00:15:30.526 "copy": true, 00:15:30.526 "nvme_iov_md": false 00:15:30.526 }, 00:15:30.526 "memory_domains": [ 00:15:30.526 { 00:15:30.526 "dma_device_id": "system", 00:15:30.526 "dma_device_type": 1 00:15:30.526 }, 00:15:30.526 { 00:15:30.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.526 "dma_device_type": 2 00:15:30.526 } 00:15:30.526 ], 00:15:30.526 "driver_specific": {} 00:15:30.526 } 00:15:30.526 ] 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.526 17:49:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.526 "name": "Existed_Raid", 00:15:30.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.526 "strip_size_kb": 64, 00:15:30.526 "state": "configuring", 00:15:30.526 "raid_level": "raid5f", 00:15:30.526 "superblock": false, 00:15:30.526 "num_base_bdevs": 3, 00:15:30.526 "num_base_bdevs_discovered": 1, 00:15:30.526 "num_base_bdevs_operational": 3, 00:15:30.526 "base_bdevs_list": [ 00:15:30.526 { 00:15:30.526 "name": "BaseBdev1", 00:15:30.526 "uuid": "fbd0657a-d240-49d3-8d0f-af8c3588b0ee", 00:15:30.526 "is_configured": true, 00:15:30.526 "data_offset": 0, 00:15:30.526 "data_size": 65536 00:15:30.526 }, 00:15:30.526 { 00:15:30.526 "name": 
"BaseBdev2", 00:15:30.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.526 "is_configured": false, 00:15:30.526 "data_offset": 0, 00:15:30.526 "data_size": 0 00:15:30.526 }, 00:15:30.526 { 00:15:30.526 "name": "BaseBdev3", 00:15:30.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.526 "is_configured": false, 00:15:30.526 "data_offset": 0, 00:15:30.526 "data_size": 0 00:15:30.526 } 00:15:30.526 ] 00:15:30.526 }' 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.526 17:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.097 [2024-11-20 17:49:58.025419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.097 [2024-11-20 17:49:58.025480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.097 [2024-11-20 17:49:58.037453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.097 [2024-11-20 17:49:58.039480] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:31.097 [2024-11-20 17:49:58.039521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.097 [2024-11-20 17:49:58.039530] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:31.097 [2024-11-20 17:49:58.039538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.097 "name": "Existed_Raid", 00:15:31.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.097 "strip_size_kb": 64, 00:15:31.097 "state": "configuring", 00:15:31.097 "raid_level": "raid5f", 00:15:31.097 "superblock": false, 00:15:31.097 "num_base_bdevs": 3, 00:15:31.097 "num_base_bdevs_discovered": 1, 00:15:31.097 "num_base_bdevs_operational": 3, 00:15:31.097 "base_bdevs_list": [ 00:15:31.097 { 00:15:31.097 "name": "BaseBdev1", 00:15:31.097 "uuid": "fbd0657a-d240-49d3-8d0f-af8c3588b0ee", 00:15:31.097 "is_configured": true, 00:15:31.097 "data_offset": 0, 00:15:31.097 "data_size": 65536 00:15:31.097 }, 00:15:31.097 { 00:15:31.097 "name": "BaseBdev2", 00:15:31.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.097 "is_configured": false, 00:15:31.097 "data_offset": 0, 00:15:31.097 "data_size": 0 00:15:31.097 }, 00:15:31.097 { 00:15:31.097 "name": "BaseBdev3", 00:15:31.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.097 "is_configured": false, 00:15:31.097 "data_offset": 0, 00:15:31.097 "data_size": 0 00:15:31.097 } 00:15:31.097 ] 00:15:31.097 }' 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.097 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.357 [2024-11-20 17:49:58.515656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.357 BaseBdev2 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.357 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.617 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.617 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.617 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.617 17:49:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.617 [ 00:15:31.617 { 00:15:31.617 "name": "BaseBdev2", 00:15:31.617 "aliases": [ 00:15:31.617 "a0fb88ae-e30a-4d39-a4a2-be517cad4a62" 00:15:31.617 ], 00:15:31.617 "product_name": "Malloc disk", 00:15:31.617 "block_size": 512, 00:15:31.618 "num_blocks": 65536, 00:15:31.618 "uuid": "a0fb88ae-e30a-4d39-a4a2-be517cad4a62", 00:15:31.618 "assigned_rate_limits": { 00:15:31.618 "rw_ios_per_sec": 0, 00:15:31.618 "rw_mbytes_per_sec": 0, 00:15:31.618 "r_mbytes_per_sec": 0, 00:15:31.618 "w_mbytes_per_sec": 0 00:15:31.618 }, 00:15:31.618 "claimed": true, 00:15:31.618 "claim_type": "exclusive_write", 00:15:31.618 "zoned": false, 00:15:31.618 "supported_io_types": { 00:15:31.618 "read": true, 00:15:31.618 "write": true, 00:15:31.618 "unmap": true, 00:15:31.618 "flush": true, 00:15:31.618 "reset": true, 00:15:31.618 "nvme_admin": false, 00:15:31.618 "nvme_io": false, 00:15:31.618 "nvme_io_md": false, 00:15:31.618 "write_zeroes": true, 00:15:31.618 "zcopy": true, 00:15:31.618 "get_zone_info": false, 00:15:31.618 "zone_management": false, 00:15:31.618 "zone_append": false, 00:15:31.618 "compare": false, 00:15:31.618 "compare_and_write": false, 00:15:31.618 "abort": true, 00:15:31.618 "seek_hole": false, 00:15:31.618 "seek_data": false, 00:15:31.618 "copy": true, 00:15:31.618 "nvme_iov_md": false 00:15:31.618 }, 00:15:31.618 "memory_domains": [ 00:15:31.618 { 00:15:31.618 "dma_device_id": "system", 00:15:31.618 "dma_device_type": 1 00:15:31.618 }, 00:15:31.618 { 00:15:31.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.618 "dma_device_type": 2 00:15:31.618 } 00:15:31.618 ], 00:15:31.618 "driver_specific": {} 00:15:31.618 } 00:15:31.618 ] 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:31.618 "name": "Existed_Raid", 00:15:31.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.618 "strip_size_kb": 64, 00:15:31.618 "state": "configuring", 00:15:31.618 "raid_level": "raid5f", 00:15:31.618 "superblock": false, 00:15:31.618 "num_base_bdevs": 3, 00:15:31.618 "num_base_bdevs_discovered": 2, 00:15:31.618 "num_base_bdevs_operational": 3, 00:15:31.618 "base_bdevs_list": [ 00:15:31.618 { 00:15:31.618 "name": "BaseBdev1", 00:15:31.618 "uuid": "fbd0657a-d240-49d3-8d0f-af8c3588b0ee", 00:15:31.618 "is_configured": true, 00:15:31.618 "data_offset": 0, 00:15:31.618 "data_size": 65536 00:15:31.618 }, 00:15:31.618 { 00:15:31.618 "name": "BaseBdev2", 00:15:31.618 "uuid": "a0fb88ae-e30a-4d39-a4a2-be517cad4a62", 00:15:31.618 "is_configured": true, 00:15:31.618 "data_offset": 0, 00:15:31.618 "data_size": 65536 00:15:31.618 }, 00:15:31.618 { 00:15:31.618 "name": "BaseBdev3", 00:15:31.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.618 "is_configured": false, 00:15:31.618 "data_offset": 0, 00:15:31.618 "data_size": 0 00:15:31.618 } 00:15:31.618 ] 00:15:31.618 }' 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.618 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.879 17:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.879 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.879 17:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.879 [2024-11-20 17:49:58.994258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.879 [2024-11-20 17:49:58.994325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:31.879 [2024-11-20 17:49:58.994340] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:31.879 [2024-11-20 17:49:58.994624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:31.879 [2024-11-20 17:49:58.999878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:31.879 [2024-11-20 17:49:58.999900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:31.879 [2024-11-20 17:49:59.000299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.879 BaseBdev3 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.879 [ 00:15:31.879 { 00:15:31.879 "name": "BaseBdev3", 00:15:31.879 "aliases": [ 00:15:31.879 "3a89aaa5-188e-44a9-9a41-56a11dc5757e" 00:15:31.879 ], 00:15:31.879 "product_name": "Malloc disk", 00:15:31.879 "block_size": 512, 00:15:31.879 "num_blocks": 65536, 00:15:31.879 "uuid": "3a89aaa5-188e-44a9-9a41-56a11dc5757e", 00:15:31.879 "assigned_rate_limits": { 00:15:31.879 "rw_ios_per_sec": 0, 00:15:31.879 "rw_mbytes_per_sec": 0, 00:15:31.879 "r_mbytes_per_sec": 0, 00:15:31.879 "w_mbytes_per_sec": 0 00:15:31.879 }, 00:15:31.879 "claimed": true, 00:15:31.879 "claim_type": "exclusive_write", 00:15:31.879 "zoned": false, 00:15:31.879 "supported_io_types": { 00:15:31.879 "read": true, 00:15:31.879 "write": true, 00:15:31.879 "unmap": true, 00:15:31.879 "flush": true, 00:15:31.879 "reset": true, 00:15:31.879 "nvme_admin": false, 00:15:31.879 "nvme_io": false, 00:15:31.879 "nvme_io_md": false, 00:15:31.879 "write_zeroes": true, 00:15:31.879 "zcopy": true, 00:15:31.879 "get_zone_info": false, 00:15:31.879 "zone_management": false, 00:15:31.879 "zone_append": false, 00:15:31.879 "compare": false, 00:15:31.879 "compare_and_write": false, 00:15:31.879 "abort": true, 00:15:31.879 "seek_hole": false, 00:15:31.879 "seek_data": false, 00:15:31.879 "copy": true, 00:15:31.879 "nvme_iov_md": false 00:15:31.879 }, 00:15:31.879 "memory_domains": [ 00:15:31.879 { 00:15:31.879 "dma_device_id": "system", 00:15:31.879 "dma_device_type": 1 00:15:31.879 }, 00:15:31.879 { 00:15:31.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.879 "dma_device_type": 2 00:15:31.879 } 00:15:31.879 ], 00:15:31.879 "driver_specific": {} 00:15:31.879 } 00:15:31.879 ] 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.879 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.146 17:49:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.146 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.146 "name": "Existed_Raid", 00:15:32.146 "uuid": "732fdbba-25a0-4e3d-8173-16181e539d2d", 00:15:32.146 "strip_size_kb": 64, 00:15:32.146 "state": "online", 00:15:32.146 "raid_level": "raid5f", 00:15:32.146 "superblock": false, 00:15:32.146 "num_base_bdevs": 3, 00:15:32.146 "num_base_bdevs_discovered": 3, 00:15:32.146 "num_base_bdevs_operational": 3, 00:15:32.146 "base_bdevs_list": [ 00:15:32.146 { 00:15:32.146 "name": "BaseBdev1", 00:15:32.146 "uuid": "fbd0657a-d240-49d3-8d0f-af8c3588b0ee", 00:15:32.146 "is_configured": true, 00:15:32.146 "data_offset": 0, 00:15:32.146 "data_size": 65536 00:15:32.146 }, 00:15:32.146 { 00:15:32.146 "name": "BaseBdev2", 00:15:32.146 "uuid": "a0fb88ae-e30a-4d39-a4a2-be517cad4a62", 00:15:32.146 "is_configured": true, 00:15:32.146 "data_offset": 0, 00:15:32.146 "data_size": 65536 00:15:32.146 }, 00:15:32.146 { 00:15:32.146 "name": "BaseBdev3", 00:15:32.146 "uuid": "3a89aaa5-188e-44a9-9a41-56a11dc5757e", 00:15:32.146 "is_configured": true, 00:15:32.146 "data_offset": 0, 00:15:32.146 "data_size": 65536 00:15:32.146 } 00:15:32.146 ] 00:15:32.146 }' 00:15:32.146 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.146 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.410 17:49:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.410 [2024-11-20 17:49:59.470424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.410 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.410 "name": "Existed_Raid", 00:15:32.410 "aliases": [ 00:15:32.410 "732fdbba-25a0-4e3d-8173-16181e539d2d" 00:15:32.410 ], 00:15:32.410 "product_name": "Raid Volume", 00:15:32.410 "block_size": 512, 00:15:32.410 "num_blocks": 131072, 00:15:32.410 "uuid": "732fdbba-25a0-4e3d-8173-16181e539d2d", 00:15:32.410 "assigned_rate_limits": { 00:15:32.410 "rw_ios_per_sec": 0, 00:15:32.410 "rw_mbytes_per_sec": 0, 00:15:32.410 "r_mbytes_per_sec": 0, 00:15:32.410 "w_mbytes_per_sec": 0 00:15:32.410 }, 00:15:32.410 "claimed": false, 00:15:32.410 "zoned": false, 00:15:32.410 "supported_io_types": { 00:15:32.410 "read": true, 00:15:32.410 "write": true, 00:15:32.410 "unmap": false, 00:15:32.410 "flush": false, 00:15:32.410 "reset": true, 00:15:32.410 "nvme_admin": false, 00:15:32.410 "nvme_io": false, 00:15:32.410 "nvme_io_md": false, 00:15:32.410 "write_zeroes": true, 00:15:32.411 "zcopy": false, 00:15:32.411 "get_zone_info": false, 00:15:32.411 "zone_management": false, 00:15:32.411 "zone_append": false, 
00:15:32.411 "compare": false, 00:15:32.411 "compare_and_write": false, 00:15:32.411 "abort": false, 00:15:32.411 "seek_hole": false, 00:15:32.411 "seek_data": false, 00:15:32.411 "copy": false, 00:15:32.411 "nvme_iov_md": false 00:15:32.411 }, 00:15:32.411 "driver_specific": { 00:15:32.411 "raid": { 00:15:32.411 "uuid": "732fdbba-25a0-4e3d-8173-16181e539d2d", 00:15:32.411 "strip_size_kb": 64, 00:15:32.411 "state": "online", 00:15:32.411 "raid_level": "raid5f", 00:15:32.411 "superblock": false, 00:15:32.411 "num_base_bdevs": 3, 00:15:32.411 "num_base_bdevs_discovered": 3, 00:15:32.411 "num_base_bdevs_operational": 3, 00:15:32.411 "base_bdevs_list": [ 00:15:32.411 { 00:15:32.411 "name": "BaseBdev1", 00:15:32.411 "uuid": "fbd0657a-d240-49d3-8d0f-af8c3588b0ee", 00:15:32.411 "is_configured": true, 00:15:32.411 "data_offset": 0, 00:15:32.411 "data_size": 65536 00:15:32.411 }, 00:15:32.411 { 00:15:32.411 "name": "BaseBdev2", 00:15:32.411 "uuid": "a0fb88ae-e30a-4d39-a4a2-be517cad4a62", 00:15:32.411 "is_configured": true, 00:15:32.411 "data_offset": 0, 00:15:32.411 "data_size": 65536 00:15:32.411 }, 00:15:32.411 { 00:15:32.411 "name": "BaseBdev3", 00:15:32.411 "uuid": "3a89aaa5-188e-44a9-9a41-56a11dc5757e", 00:15:32.411 "is_configured": true, 00:15:32.411 "data_offset": 0, 00:15:32.411 "data_size": 65536 00:15:32.411 } 00:15:32.411 ] 00:15:32.411 } 00:15:32.411 } 00:15:32.411 }' 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:32.411 BaseBdev2 00:15:32.411 BaseBdev3' 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.411 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.671 [2024-11-20 17:49:59.733793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:32.671 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:32.931 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:32.931 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:32.931 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:32.931 
17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:32.931 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.931 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.931 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.932 "name": "Existed_Raid", 00:15:32.932 "uuid": "732fdbba-25a0-4e3d-8173-16181e539d2d", 00:15:32.932 "strip_size_kb": 64, 00:15:32.932 "state": 
"online", 00:15:32.932 "raid_level": "raid5f", 00:15:32.932 "superblock": false, 00:15:32.932 "num_base_bdevs": 3, 00:15:32.932 "num_base_bdevs_discovered": 2, 00:15:32.932 "num_base_bdevs_operational": 2, 00:15:32.932 "base_bdevs_list": [ 00:15:32.932 { 00:15:32.932 "name": null, 00:15:32.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.932 "is_configured": false, 00:15:32.932 "data_offset": 0, 00:15:32.932 "data_size": 65536 00:15:32.932 }, 00:15:32.932 { 00:15:32.932 "name": "BaseBdev2", 00:15:32.932 "uuid": "a0fb88ae-e30a-4d39-a4a2-be517cad4a62", 00:15:32.932 "is_configured": true, 00:15:32.932 "data_offset": 0, 00:15:32.932 "data_size": 65536 00:15:32.932 }, 00:15:32.932 { 00:15:32.932 "name": "BaseBdev3", 00:15:32.932 "uuid": "3a89aaa5-188e-44a9-9a41-56a11dc5757e", 00:15:32.932 "is_configured": true, 00:15:32.932 "data_offset": 0, 00:15:32.932 "data_size": 65536 00:15:32.932 } 00:15:32.932 ] 00:15:32.932 }' 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.932 17:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:33.191 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.192 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.192 [2024-11-20 17:50:00.310314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.192 [2024-11-20 17:50:00.310503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.452 [2024-11-20 17:50:00.413553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.452 [2024-11-20 17:50:00.473467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:33.452 [2024-11-20 17:50:00.473558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.452 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.714 BaseBdev2 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:33.714 [ 00:15:33.714 { 00:15:33.714 "name": "BaseBdev2", 00:15:33.714 "aliases": [ 00:15:33.714 "9df50f10-7975-4f3d-a638-379fa0597dbd" 00:15:33.714 ], 00:15:33.714 "product_name": "Malloc disk", 00:15:33.714 "block_size": 512, 00:15:33.714 "num_blocks": 65536, 00:15:33.714 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:33.714 "assigned_rate_limits": { 00:15:33.714 "rw_ios_per_sec": 0, 00:15:33.714 "rw_mbytes_per_sec": 0, 00:15:33.714 "r_mbytes_per_sec": 0, 00:15:33.714 "w_mbytes_per_sec": 0 00:15:33.714 }, 00:15:33.714 "claimed": false, 00:15:33.714 "zoned": false, 00:15:33.714 "supported_io_types": { 00:15:33.714 "read": true, 00:15:33.714 "write": true, 00:15:33.714 "unmap": true, 00:15:33.714 "flush": true, 00:15:33.714 "reset": true, 00:15:33.714 "nvme_admin": false, 00:15:33.714 "nvme_io": false, 00:15:33.714 "nvme_io_md": false, 00:15:33.714 "write_zeroes": true, 00:15:33.714 "zcopy": true, 00:15:33.714 "get_zone_info": false, 00:15:33.714 "zone_management": false, 00:15:33.714 "zone_append": false, 00:15:33.714 "compare": false, 00:15:33.714 "compare_and_write": false, 00:15:33.714 "abort": true, 00:15:33.714 "seek_hole": false, 00:15:33.714 "seek_data": false, 00:15:33.714 "copy": true, 00:15:33.714 "nvme_iov_md": false 00:15:33.714 }, 00:15:33.714 "memory_domains": [ 00:15:33.714 { 00:15:33.714 "dma_device_id": "system", 00:15:33.714 "dma_device_type": 1 00:15:33.714 }, 00:15:33.714 { 00:15:33.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.714 "dma_device_type": 2 00:15:33.714 } 00:15:33.714 ], 00:15:33.714 "driver_specific": {} 00:15:33.714 } 00:15:33.714 ] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.714 BaseBdev3 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.714 17:50:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.714 [ 00:15:33.714 { 00:15:33.715 "name": "BaseBdev3", 00:15:33.715 "aliases": [ 00:15:33.715 "1848334b-7ff9-4adc-be4b-4ef696c62512" 00:15:33.715 ], 00:15:33.715 "product_name": "Malloc disk", 00:15:33.715 "block_size": 512, 00:15:33.715 "num_blocks": 65536, 00:15:33.715 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:33.715 "assigned_rate_limits": { 00:15:33.715 "rw_ios_per_sec": 0, 00:15:33.715 "rw_mbytes_per_sec": 0, 00:15:33.715 "r_mbytes_per_sec": 0, 00:15:33.715 "w_mbytes_per_sec": 0 00:15:33.715 }, 00:15:33.715 "claimed": false, 00:15:33.715 "zoned": false, 00:15:33.715 "supported_io_types": { 00:15:33.715 "read": true, 00:15:33.715 "write": true, 00:15:33.715 "unmap": true, 00:15:33.715 "flush": true, 00:15:33.715 "reset": true, 00:15:33.715 "nvme_admin": false, 00:15:33.715 "nvme_io": false, 00:15:33.715 "nvme_io_md": false, 00:15:33.715 "write_zeroes": true, 00:15:33.715 "zcopy": true, 00:15:33.715 "get_zone_info": false, 00:15:33.715 "zone_management": false, 00:15:33.715 "zone_append": false, 00:15:33.715 "compare": false, 00:15:33.715 "compare_and_write": false, 00:15:33.715 "abort": true, 00:15:33.715 "seek_hole": false, 00:15:33.715 "seek_data": false, 00:15:33.715 "copy": true, 00:15:33.715 "nvme_iov_md": false 00:15:33.715 }, 00:15:33.715 "memory_domains": [ 00:15:33.715 { 00:15:33.715 "dma_device_id": "system", 00:15:33.715 "dma_device_type": 1 00:15:33.715 }, 00:15:33.715 { 00:15:33.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.715 "dma_device_type": 2 00:15:33.715 } 00:15:33.715 ], 00:15:33.715 "driver_specific": {} 00:15:33.715 } 00:15:33.715 ] 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:33.715 17:50:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.715 [2024-11-20 17:50:00.801973] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.715 [2024-11-20 17:50:00.802034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.715 [2024-11-20 17:50:00.802058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.715 [2024-11-20 17:50:00.803997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.715 17:50:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.715 "name": "Existed_Raid", 00:15:33.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.715 "strip_size_kb": 64, 00:15:33.715 "state": "configuring", 00:15:33.715 "raid_level": "raid5f", 00:15:33.715 "superblock": false, 00:15:33.715 "num_base_bdevs": 3, 00:15:33.715 "num_base_bdevs_discovered": 2, 00:15:33.715 "num_base_bdevs_operational": 3, 00:15:33.715 "base_bdevs_list": [ 00:15:33.715 { 00:15:33.715 "name": "BaseBdev1", 00:15:33.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.715 "is_configured": false, 00:15:33.715 "data_offset": 0, 00:15:33.715 "data_size": 0 00:15:33.715 }, 00:15:33.715 { 00:15:33.715 "name": "BaseBdev2", 00:15:33.715 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:33.715 "is_configured": true, 00:15:33.715 "data_offset": 0, 00:15:33.715 "data_size": 65536 00:15:33.715 }, 00:15:33.715 { 00:15:33.715 "name": "BaseBdev3", 00:15:33.715 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:33.715 "is_configured": true, 
00:15:33.715 "data_offset": 0, 00:15:33.715 "data_size": 65536 00:15:33.715 } 00:15:33.715 ] 00:15:33.715 }' 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.715 17:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.285 [2024-11-20 17:50:01.213335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.285 17:50:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.285 "name": "Existed_Raid", 00:15:34.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.285 "strip_size_kb": 64, 00:15:34.285 "state": "configuring", 00:15:34.285 "raid_level": "raid5f", 00:15:34.285 "superblock": false, 00:15:34.285 "num_base_bdevs": 3, 00:15:34.285 "num_base_bdevs_discovered": 1, 00:15:34.285 "num_base_bdevs_operational": 3, 00:15:34.285 "base_bdevs_list": [ 00:15:34.285 { 00:15:34.285 "name": "BaseBdev1", 00:15:34.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.285 "is_configured": false, 00:15:34.285 "data_offset": 0, 00:15:34.285 "data_size": 0 00:15:34.285 }, 00:15:34.285 { 00:15:34.285 "name": null, 00:15:34.285 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:34.285 "is_configured": false, 00:15:34.285 "data_offset": 0, 00:15:34.285 "data_size": 65536 00:15:34.285 }, 00:15:34.285 { 00:15:34.285 "name": "BaseBdev3", 00:15:34.285 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:34.285 "is_configured": true, 00:15:34.285 "data_offset": 0, 00:15:34.285 "data_size": 65536 00:15:34.285 } 00:15:34.285 ] 00:15:34.285 }' 00:15:34.285 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.285 17:50:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.545 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.805 [2024-11-20 17:50:01.723153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.805 BaseBdev1 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.805 17:50:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.805 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.805 [ 00:15:34.805 { 00:15:34.806 "name": "BaseBdev1", 00:15:34.806 "aliases": [ 00:15:34.806 "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3" 00:15:34.806 ], 00:15:34.806 "product_name": "Malloc disk", 00:15:34.806 "block_size": 512, 00:15:34.806 "num_blocks": 65536, 00:15:34.806 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:34.806 "assigned_rate_limits": { 00:15:34.806 "rw_ios_per_sec": 0, 00:15:34.806 "rw_mbytes_per_sec": 0, 00:15:34.806 "r_mbytes_per_sec": 0, 00:15:34.806 "w_mbytes_per_sec": 0 00:15:34.806 }, 00:15:34.806 "claimed": true, 00:15:34.806 "claim_type": "exclusive_write", 00:15:34.806 "zoned": false, 00:15:34.806 "supported_io_types": { 00:15:34.806 "read": true, 00:15:34.806 "write": true, 00:15:34.806 "unmap": true, 00:15:34.806 "flush": true, 00:15:34.806 "reset": true, 00:15:34.806 "nvme_admin": false, 00:15:34.806 "nvme_io": false, 00:15:34.806 "nvme_io_md": false, 00:15:34.806 "write_zeroes": true, 00:15:34.806 "zcopy": true, 00:15:34.806 "get_zone_info": false, 00:15:34.806 "zone_management": false, 00:15:34.806 "zone_append": false, 00:15:34.806 
"compare": false, 00:15:34.806 "compare_and_write": false, 00:15:34.806 "abort": true, 00:15:34.806 "seek_hole": false, 00:15:34.806 "seek_data": false, 00:15:34.806 "copy": true, 00:15:34.806 "nvme_iov_md": false 00:15:34.806 }, 00:15:34.806 "memory_domains": [ 00:15:34.806 { 00:15:34.806 "dma_device_id": "system", 00:15:34.806 "dma_device_type": 1 00:15:34.806 }, 00:15:34.806 { 00:15:34.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.806 "dma_device_type": 2 00:15:34.806 } 00:15:34.806 ], 00:15:34.806 "driver_specific": {} 00:15:34.806 } 00:15:34.806 ] 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.806 17:50:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.806 "name": "Existed_Raid", 00:15:34.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.806 "strip_size_kb": 64, 00:15:34.806 "state": "configuring", 00:15:34.806 "raid_level": "raid5f", 00:15:34.806 "superblock": false, 00:15:34.806 "num_base_bdevs": 3, 00:15:34.806 "num_base_bdevs_discovered": 2, 00:15:34.806 "num_base_bdevs_operational": 3, 00:15:34.806 "base_bdevs_list": [ 00:15:34.806 { 00:15:34.806 "name": "BaseBdev1", 00:15:34.806 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:34.806 "is_configured": true, 00:15:34.806 "data_offset": 0, 00:15:34.806 "data_size": 65536 00:15:34.806 }, 00:15:34.806 { 00:15:34.806 "name": null, 00:15:34.806 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:34.806 "is_configured": false, 00:15:34.806 "data_offset": 0, 00:15:34.806 "data_size": 65536 00:15:34.806 }, 00:15:34.806 { 00:15:34.806 "name": "BaseBdev3", 00:15:34.806 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:34.806 "is_configured": true, 00:15:34.806 "data_offset": 0, 00:15:34.806 "data_size": 65536 00:15:34.806 } 00:15:34.806 ] 00:15:34.806 }' 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.806 17:50:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.066 17:50:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.066 [2024-11-20 17:50:02.170504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.066 17:50:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.066 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.066 "name": "Existed_Raid", 00:15:35.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.066 "strip_size_kb": 64, 00:15:35.066 "state": "configuring", 00:15:35.066 "raid_level": "raid5f", 00:15:35.066 "superblock": false, 00:15:35.066 "num_base_bdevs": 3, 00:15:35.066 "num_base_bdevs_discovered": 1, 00:15:35.066 "num_base_bdevs_operational": 3, 00:15:35.066 "base_bdevs_list": [ 00:15:35.066 { 00:15:35.066 "name": "BaseBdev1", 00:15:35.066 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:35.066 "is_configured": true, 00:15:35.066 "data_offset": 0, 00:15:35.066 "data_size": 65536 00:15:35.066 }, 00:15:35.066 { 00:15:35.066 "name": null, 00:15:35.066 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:35.066 "is_configured": false, 00:15:35.066 "data_offset": 0, 00:15:35.066 "data_size": 65536 00:15:35.066 }, 00:15:35.066 { 00:15:35.066 "name": null, 
00:15:35.066 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:35.066 "is_configured": false, 00:15:35.066 "data_offset": 0, 00:15:35.066 "data_size": 65536 00:15:35.066 } 00:15:35.066 ] 00:15:35.066 }' 00:15:35.067 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.067 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.637 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.637 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.637 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.638 [2024-11-20 17:50:02.657729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.638 17:50:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.638 "name": "Existed_Raid", 00:15:35.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.638 "strip_size_kb": 64, 00:15:35.638 "state": "configuring", 00:15:35.638 "raid_level": "raid5f", 00:15:35.638 "superblock": false, 00:15:35.638 "num_base_bdevs": 3, 00:15:35.638 "num_base_bdevs_discovered": 2, 00:15:35.638 "num_base_bdevs_operational": 3, 00:15:35.638 "base_bdevs_list": [ 00:15:35.638 { 
00:15:35.638 "name": "BaseBdev1", 00:15:35.638 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:35.638 "is_configured": true, 00:15:35.638 "data_offset": 0, 00:15:35.638 "data_size": 65536 00:15:35.638 }, 00:15:35.638 { 00:15:35.638 "name": null, 00:15:35.638 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:35.638 "is_configured": false, 00:15:35.638 "data_offset": 0, 00:15:35.638 "data_size": 65536 00:15:35.638 }, 00:15:35.638 { 00:15:35.638 "name": "BaseBdev3", 00:15:35.638 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:35.638 "is_configured": true, 00:15:35.638 "data_offset": 0, 00:15:35.638 "data_size": 65536 00:15:35.638 } 00:15:35.638 ] 00:15:35.638 }' 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.638 17:50:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.896 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.896 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:35.896 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.896 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.156 [2024-11-20 17:50:03.104989] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.156 "name": "Existed_Raid", 00:15:36.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.156 "strip_size_kb": 64, 00:15:36.156 "state": "configuring", 00:15:36.156 "raid_level": "raid5f", 00:15:36.156 "superblock": false, 00:15:36.156 "num_base_bdevs": 3, 00:15:36.156 "num_base_bdevs_discovered": 1, 00:15:36.156 "num_base_bdevs_operational": 3, 00:15:36.156 "base_bdevs_list": [ 00:15:36.156 { 00:15:36.156 "name": null, 00:15:36.156 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:36.156 "is_configured": false, 00:15:36.156 "data_offset": 0, 00:15:36.156 "data_size": 65536 00:15:36.156 }, 00:15:36.156 { 00:15:36.156 "name": null, 00:15:36.156 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:36.156 "is_configured": false, 00:15:36.156 "data_offset": 0, 00:15:36.156 "data_size": 65536 00:15:36.156 }, 00:15:36.156 { 00:15:36.156 "name": "BaseBdev3", 00:15:36.156 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:36.156 "is_configured": true, 00:15:36.156 "data_offset": 0, 00:15:36.156 "data_size": 65536 00:15:36.156 } 00:15:36.156 ] 00:15:36.156 }' 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.156 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.726 [2024-11-20 17:50:03.730206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.726 17:50:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.726 "name": "Existed_Raid", 00:15:36.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.726 "strip_size_kb": 64, 00:15:36.726 "state": "configuring", 00:15:36.726 "raid_level": "raid5f", 00:15:36.726 "superblock": false, 00:15:36.726 "num_base_bdevs": 3, 00:15:36.726 "num_base_bdevs_discovered": 2, 00:15:36.726 "num_base_bdevs_operational": 3, 00:15:36.726 "base_bdevs_list": [ 00:15:36.726 { 00:15:36.726 "name": null, 00:15:36.726 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:36.726 "is_configured": false, 00:15:36.726 "data_offset": 0, 00:15:36.726 "data_size": 65536 00:15:36.726 }, 00:15:36.726 { 00:15:36.726 "name": "BaseBdev2", 00:15:36.726 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:36.726 "is_configured": true, 00:15:36.726 "data_offset": 0, 00:15:36.726 "data_size": 65536 00:15:36.726 }, 00:15:36.726 { 00:15:36.726 "name": "BaseBdev3", 00:15:36.726 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:36.726 "is_configured": true, 00:15:36.726 "data_offset": 0, 00:15:36.726 "data_size": 65536 00:15:36.726 } 00:15:36.726 ] 00:15:36.726 }' 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.726 17:50:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.986 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.986 17:50:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.986 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.986 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:36.986 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ed90e1ed-d0eb-4ff9-91fb-56717147d9d3 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.247 [2024-11-20 17:50:04.275700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:37.247 [2024-11-20 17:50:04.275833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:37.247 [2024-11-20 17:50:04.275860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:37.247 [2024-11-20 17:50:04.276164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:37.247 [2024-11-20 17:50:04.281375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:37.247 [2024-11-20 17:50:04.281428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:37.247 [2024-11-20 17:50:04.281757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.247 NewBaseBdev 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.247 17:50:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.247 [ 00:15:37.247 { 00:15:37.247 "name": "NewBaseBdev", 00:15:37.247 "aliases": [ 00:15:37.247 "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3" 00:15:37.247 ], 00:15:37.247 "product_name": "Malloc disk", 00:15:37.247 "block_size": 512, 00:15:37.247 "num_blocks": 65536, 00:15:37.247 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:37.247 "assigned_rate_limits": { 00:15:37.247 "rw_ios_per_sec": 0, 00:15:37.247 "rw_mbytes_per_sec": 0, 00:15:37.247 "r_mbytes_per_sec": 0, 00:15:37.247 "w_mbytes_per_sec": 0 00:15:37.247 }, 00:15:37.247 "claimed": true, 00:15:37.247 "claim_type": "exclusive_write", 00:15:37.247 "zoned": false, 00:15:37.247 "supported_io_types": { 00:15:37.247 "read": true, 00:15:37.247 "write": true, 00:15:37.247 "unmap": true, 00:15:37.247 "flush": true, 00:15:37.247 "reset": true, 00:15:37.247 "nvme_admin": false, 00:15:37.247 "nvme_io": false, 00:15:37.247 "nvme_io_md": false, 00:15:37.247 "write_zeroes": true, 00:15:37.247 "zcopy": true, 00:15:37.247 "get_zone_info": false, 00:15:37.247 "zone_management": false, 00:15:37.247 "zone_append": false, 00:15:37.247 "compare": false, 00:15:37.247 "compare_and_write": false, 00:15:37.247 "abort": true, 00:15:37.247 "seek_hole": false, 00:15:37.247 "seek_data": false, 00:15:37.247 "copy": true, 00:15:37.247 "nvme_iov_md": false 00:15:37.247 }, 00:15:37.247 "memory_domains": [ 00:15:37.247 { 00:15:37.247 "dma_device_id": "system", 00:15:37.247 "dma_device_type": 1 00:15:37.247 }, 00:15:37.247 { 00:15:37.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.247 "dma_device_type": 2 00:15:37.247 } 00:15:37.247 ], 00:15:37.247 "driver_specific": {} 00:15:37.247 } 00:15:37.247 ] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:37.247 17:50:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.247 "name": "Existed_Raid", 00:15:37.247 "uuid": "d690ebad-3455-4965-997e-26002ff80ce6", 00:15:37.247 "strip_size_kb": 64, 00:15:37.247 "state": "online", 
00:15:37.247 "raid_level": "raid5f", 00:15:37.247 "superblock": false, 00:15:37.247 "num_base_bdevs": 3, 00:15:37.247 "num_base_bdevs_discovered": 3, 00:15:37.247 "num_base_bdevs_operational": 3, 00:15:37.247 "base_bdevs_list": [ 00:15:37.247 { 00:15:37.247 "name": "NewBaseBdev", 00:15:37.247 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:37.247 "is_configured": true, 00:15:37.247 "data_offset": 0, 00:15:37.247 "data_size": 65536 00:15:37.247 }, 00:15:37.247 { 00:15:37.247 "name": "BaseBdev2", 00:15:37.247 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:37.247 "is_configured": true, 00:15:37.247 "data_offset": 0, 00:15:37.247 "data_size": 65536 00:15:37.247 }, 00:15:37.247 { 00:15:37.247 "name": "BaseBdev3", 00:15:37.247 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:37.247 "is_configured": true, 00:15:37.247 "data_offset": 0, 00:15:37.247 "data_size": 65536 00:15:37.247 } 00:15:37.247 ] 00:15:37.247 }' 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.247 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.842 [2024-11-20 17:50:04.807903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.842 "name": "Existed_Raid", 00:15:37.842 "aliases": [ 00:15:37.842 "d690ebad-3455-4965-997e-26002ff80ce6" 00:15:37.842 ], 00:15:37.842 "product_name": "Raid Volume", 00:15:37.842 "block_size": 512, 00:15:37.842 "num_blocks": 131072, 00:15:37.842 "uuid": "d690ebad-3455-4965-997e-26002ff80ce6", 00:15:37.842 "assigned_rate_limits": { 00:15:37.842 "rw_ios_per_sec": 0, 00:15:37.842 "rw_mbytes_per_sec": 0, 00:15:37.842 "r_mbytes_per_sec": 0, 00:15:37.842 "w_mbytes_per_sec": 0 00:15:37.842 }, 00:15:37.842 "claimed": false, 00:15:37.842 "zoned": false, 00:15:37.842 "supported_io_types": { 00:15:37.842 "read": true, 00:15:37.842 "write": true, 00:15:37.842 "unmap": false, 00:15:37.842 "flush": false, 00:15:37.842 "reset": true, 00:15:37.842 "nvme_admin": false, 00:15:37.842 "nvme_io": false, 00:15:37.842 "nvme_io_md": false, 00:15:37.842 "write_zeroes": true, 00:15:37.842 "zcopy": false, 00:15:37.842 "get_zone_info": false, 00:15:37.842 "zone_management": false, 00:15:37.842 "zone_append": false, 00:15:37.842 "compare": false, 00:15:37.842 "compare_and_write": false, 00:15:37.842 "abort": false, 00:15:37.842 "seek_hole": false, 00:15:37.842 "seek_data": false, 00:15:37.842 "copy": false, 00:15:37.842 "nvme_iov_md": false 00:15:37.842 }, 00:15:37.842 "driver_specific": { 00:15:37.842 "raid": { 00:15:37.842 "uuid": "d690ebad-3455-4965-997e-26002ff80ce6", 
00:15:37.842 "strip_size_kb": 64, 00:15:37.842 "state": "online", 00:15:37.842 "raid_level": "raid5f", 00:15:37.842 "superblock": false, 00:15:37.842 "num_base_bdevs": 3, 00:15:37.842 "num_base_bdevs_discovered": 3, 00:15:37.842 "num_base_bdevs_operational": 3, 00:15:37.842 "base_bdevs_list": [ 00:15:37.842 { 00:15:37.842 "name": "NewBaseBdev", 00:15:37.842 "uuid": "ed90e1ed-d0eb-4ff9-91fb-56717147d9d3", 00:15:37.842 "is_configured": true, 00:15:37.842 "data_offset": 0, 00:15:37.842 "data_size": 65536 00:15:37.842 }, 00:15:37.842 { 00:15:37.842 "name": "BaseBdev2", 00:15:37.842 "uuid": "9df50f10-7975-4f3d-a638-379fa0597dbd", 00:15:37.842 "is_configured": true, 00:15:37.842 "data_offset": 0, 00:15:37.842 "data_size": 65536 00:15:37.842 }, 00:15:37.842 { 00:15:37.842 "name": "BaseBdev3", 00:15:37.842 "uuid": "1848334b-7ff9-4adc-be4b-4ef696c62512", 00:15:37.842 "is_configured": true, 00:15:37.842 "data_offset": 0, 00:15:37.842 "data_size": 65536 00:15:37.842 } 00:15:37.842 ] 00:15:37.842 } 00:15:37.842 } 00:15:37.842 }' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:37.842 BaseBdev2 00:15:37.842 BaseBdev3' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.842 17:50:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.141 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.141 [2024-11-20 17:50:05.063228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.141 [2024-11-20 17:50:05.063296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.141 [2024-11-20 17:50:05.063396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.142 [2024-11-20 17:50:05.063707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.142 [2024-11-20 17:50:05.063762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80344 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80344 ']' 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80344 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80344 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80344' 00:15:38.142 killing process with pid 80344 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80344 00:15:38.142 [2024-11-20 17:50:05.117279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.142 17:50:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80344 00:15:38.401 [2024-11-20 17:50:05.438916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:39.780 00:15:39.780 real 0m10.535s 00:15:39.780 user 0m16.391s 00:15:39.780 sys 0m1.989s 00:15:39.780 ************************************ 00:15:39.780 END TEST raid5f_state_function_test 00:15:39.780 ************************************ 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.780 17:50:06 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:39.780 17:50:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:15:39.780 17:50:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.780 17:50:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.780 ************************************ 00:15:39.780 START TEST raid5f_state_function_test_sb 00:15:39.780 ************************************ 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:39.780 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80989 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80989' 00:15:39.781 Process raid pid: 80989 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80989 00:15:39.781 17:50:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80989 ']' 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.781 17:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.781 [2024-11-20 17:50:06.817289] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:15:39.781 [2024-11-20 17:50:06.817484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.040 [2024-11-20 17:50:06.995602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.040 [2024-11-20 17:50:07.132357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.299 [2024-11-20 17:50:07.368442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.299 [2024-11-20 17:50:07.368484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:40.558 17:50:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.558 [2024-11-20 17:50:07.627774] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:40.558 [2024-11-20 17:50:07.627923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:40.558 [2024-11-20 17:50:07.627959] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:40.558 [2024-11-20 17:50:07.627982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.558 [2024-11-20 17:50:07.627999] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:40.558 [2024-11-20 17:50:07.628036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.558 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.559 "name": "Existed_Raid", 00:15:40.559 "uuid": "1adf44ac-0bc4-41d0-9586-a5f15c22dcf3", 00:15:40.559 "strip_size_kb": 64, 00:15:40.559 "state": "configuring", 00:15:40.559 "raid_level": "raid5f", 00:15:40.559 "superblock": true, 00:15:40.559 "num_base_bdevs": 3, 00:15:40.559 "num_base_bdevs_discovered": 0, 00:15:40.559 "num_base_bdevs_operational": 3, 00:15:40.559 "base_bdevs_list": [ 00:15:40.559 { 00:15:40.559 "name": "BaseBdev1", 00:15:40.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.559 "is_configured": false, 00:15:40.559 "data_offset": 0, 00:15:40.559 "data_size": 0 00:15:40.559 }, 00:15:40.559 { 00:15:40.559 "name": "BaseBdev2", 00:15:40.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.559 "is_configured": false, 00:15:40.559 
"data_offset": 0, 00:15:40.559 "data_size": 0 00:15:40.559 }, 00:15:40.559 { 00:15:40.559 "name": "BaseBdev3", 00:15:40.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.559 "is_configured": false, 00:15:40.559 "data_offset": 0, 00:15:40.559 "data_size": 0 00:15:40.559 } 00:15:40.559 ] 00:15:40.559 }' 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.559 17:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.128 [2024-11-20 17:50:08.042965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:41.128 [2024-11-20 17:50:08.043075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.128 [2024-11-20 17:50:08.054963] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.128 [2024-11-20 17:50:08.055022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.128 [2024-11-20 17:50:08.055032] 
bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:41.128 [2024-11-20 17:50:08.055042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.128 [2024-11-20 17:50:08.055048] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:41.128 [2024-11-20 17:50:08.055057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.128 [2024-11-20 17:50:08.109862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.128 BaseBdev1 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.128 [ 00:15:41.128 { 00:15:41.128 "name": "BaseBdev1", 00:15:41.128 "aliases": [ 00:15:41.128 "057685c9-21ba-4f3a-b3ed-5bbab55b3353" 00:15:41.128 ], 00:15:41.128 "product_name": "Malloc disk", 00:15:41.128 "block_size": 512, 00:15:41.128 "num_blocks": 65536, 00:15:41.128 "uuid": "057685c9-21ba-4f3a-b3ed-5bbab55b3353", 00:15:41.128 "assigned_rate_limits": { 00:15:41.128 "rw_ios_per_sec": 0, 00:15:41.128 "rw_mbytes_per_sec": 0, 00:15:41.128 "r_mbytes_per_sec": 0, 00:15:41.128 "w_mbytes_per_sec": 0 00:15:41.128 }, 00:15:41.128 "claimed": true, 00:15:41.128 "claim_type": "exclusive_write", 00:15:41.128 "zoned": false, 00:15:41.128 "supported_io_types": { 00:15:41.128 "read": true, 00:15:41.128 "write": true, 00:15:41.128 "unmap": true, 00:15:41.128 "flush": true, 00:15:41.128 "reset": true, 00:15:41.128 "nvme_admin": false, 00:15:41.128 "nvme_io": false, 00:15:41.128 "nvme_io_md": false, 00:15:41.128 "write_zeroes": true, 00:15:41.128 "zcopy": true, 00:15:41.128 "get_zone_info": false, 00:15:41.128 "zone_management": false, 00:15:41.128 "zone_append": false, 00:15:41.128 "compare": false, 00:15:41.128 "compare_and_write": false, 00:15:41.128 "abort": true, 00:15:41.128 "seek_hole": false, 00:15:41.128 
"seek_data": false, 00:15:41.128 "copy": true, 00:15:41.128 "nvme_iov_md": false 00:15:41.128 }, 00:15:41.128 "memory_domains": [ 00:15:41.128 { 00:15:41.128 "dma_device_id": "system", 00:15:41.128 "dma_device_type": 1 00:15:41.128 }, 00:15:41.128 { 00:15:41.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.128 "dma_device_type": 2 00:15:41.128 } 00:15:41.128 ], 00:15:41.128 "driver_specific": {} 00:15:41.128 } 00:15:41.128 ] 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.128 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.129 "name": "Existed_Raid", 00:15:41.129 "uuid": "d6782866-4182-4910-af9f-ea27ccc7a0f6", 00:15:41.129 "strip_size_kb": 64, 00:15:41.129 "state": "configuring", 00:15:41.129 "raid_level": "raid5f", 00:15:41.129 "superblock": true, 00:15:41.129 "num_base_bdevs": 3, 00:15:41.129 "num_base_bdevs_discovered": 1, 00:15:41.129 "num_base_bdevs_operational": 3, 00:15:41.129 "base_bdevs_list": [ 00:15:41.129 { 00:15:41.129 "name": "BaseBdev1", 00:15:41.129 "uuid": "057685c9-21ba-4f3a-b3ed-5bbab55b3353", 00:15:41.129 "is_configured": true, 00:15:41.129 "data_offset": 2048, 00:15:41.129 "data_size": 63488 00:15:41.129 }, 00:15:41.129 { 00:15:41.129 "name": "BaseBdev2", 00:15:41.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.129 "is_configured": false, 00:15:41.129 "data_offset": 0, 00:15:41.129 "data_size": 0 00:15:41.129 }, 00:15:41.129 { 00:15:41.129 "name": "BaseBdev3", 00:15:41.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.129 "is_configured": false, 00:15:41.129 "data_offset": 0, 00:15:41.129 "data_size": 0 00:15:41.129 } 00:15:41.129 ] 00:15:41.129 }' 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.129 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.699 [2024-11-20 17:50:08.593041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:41.699 [2024-11-20 17:50:08.593124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.699 [2024-11-20 17:50:08.605089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.699 [2024-11-20 17:50:08.607049] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:41.699 [2024-11-20 17:50:08.607119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.699 [2024-11-20 17:50:08.607145] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:41.699 [2024-11-20 17:50:08.607166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.699 "name": 
"Existed_Raid", 00:15:41.699 "uuid": "3134fb6a-6bce-4d87-9f6a-484d97952adb", 00:15:41.699 "strip_size_kb": 64, 00:15:41.699 "state": "configuring", 00:15:41.699 "raid_level": "raid5f", 00:15:41.699 "superblock": true, 00:15:41.699 "num_base_bdevs": 3, 00:15:41.699 "num_base_bdevs_discovered": 1, 00:15:41.699 "num_base_bdevs_operational": 3, 00:15:41.699 "base_bdevs_list": [ 00:15:41.699 { 00:15:41.699 "name": "BaseBdev1", 00:15:41.699 "uuid": "057685c9-21ba-4f3a-b3ed-5bbab55b3353", 00:15:41.699 "is_configured": true, 00:15:41.699 "data_offset": 2048, 00:15:41.699 "data_size": 63488 00:15:41.699 }, 00:15:41.699 { 00:15:41.699 "name": "BaseBdev2", 00:15:41.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.699 "is_configured": false, 00:15:41.699 "data_offset": 0, 00:15:41.699 "data_size": 0 00:15:41.699 }, 00:15:41.699 { 00:15:41.699 "name": "BaseBdev3", 00:15:41.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.699 "is_configured": false, 00:15:41.699 "data_offset": 0, 00:15:41.699 "data_size": 0 00:15:41.699 } 00:15:41.699 ] 00:15:41.699 }' 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.699 17:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 [2024-11-20 17:50:09.095165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.959 BaseBdev2 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 [ 00:15:41.959 { 00:15:41.959 "name": "BaseBdev2", 00:15:41.959 "aliases": [ 00:15:41.959 "3b40149e-9545-445f-acbb-3d32430efb8d" 00:15:41.959 ], 00:15:41.959 "product_name": "Malloc disk", 00:15:41.959 "block_size": 512, 00:15:41.959 "num_blocks": 65536, 00:15:41.959 "uuid": "3b40149e-9545-445f-acbb-3d32430efb8d", 00:15:41.959 "assigned_rate_limits": { 00:15:41.959 "rw_ios_per_sec": 0, 00:15:41.959 "rw_mbytes_per_sec": 0, 00:15:41.959 "r_mbytes_per_sec": 0, 00:15:41.959 "w_mbytes_per_sec": 0 00:15:41.960 }, 00:15:41.960 "claimed": true, 
00:15:41.960 "claim_type": "exclusive_write", 00:15:41.960 "zoned": false, 00:15:41.960 "supported_io_types": { 00:15:41.960 "read": true, 00:15:41.960 "write": true, 00:15:41.960 "unmap": true, 00:15:41.960 "flush": true, 00:15:41.960 "reset": true, 00:15:41.960 "nvme_admin": false, 00:15:41.960 "nvme_io": false, 00:15:41.960 "nvme_io_md": false, 00:15:41.960 "write_zeroes": true, 00:15:41.960 "zcopy": true, 00:15:41.960 "get_zone_info": false, 00:15:41.960 "zone_management": false, 00:15:41.960 "zone_append": false, 00:15:41.960 "compare": false, 00:15:41.960 "compare_and_write": false, 00:15:41.960 "abort": true, 00:15:41.960 "seek_hole": false, 00:15:41.960 "seek_data": false, 00:15:41.960 "copy": true, 00:15:41.960 "nvme_iov_md": false 00:15:41.960 }, 00:15:41.960 "memory_domains": [ 00:15:41.960 { 00:15:41.960 "dma_device_id": "system", 00:15:41.960 "dma_device_type": 1 00:15:41.960 }, 00:15:41.960 { 00:15:41.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.960 "dma_device_type": 2 00:15:41.960 } 00:15:41.960 ], 00:15:41.960 "driver_specific": {} 00:15:41.960 } 00:15:41.960 ] 00:15:41.960 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.960 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.960 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:41.960 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:41.960 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.960 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.960 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.960 17:50:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.231 "name": "Existed_Raid", 00:15:42.231 "uuid": "3134fb6a-6bce-4d87-9f6a-484d97952adb", 00:15:42.231 "strip_size_kb": 64, 00:15:42.231 "state": "configuring", 00:15:42.231 "raid_level": "raid5f", 00:15:42.231 "superblock": true, 00:15:42.231 "num_base_bdevs": 3, 00:15:42.231 "num_base_bdevs_discovered": 2, 00:15:42.231 "num_base_bdevs_operational": 3, 00:15:42.231 "base_bdevs_list": [ 00:15:42.231 { 00:15:42.231 "name": "BaseBdev1", 00:15:42.231 "uuid": "057685c9-21ba-4f3a-b3ed-5bbab55b3353", 
00:15:42.231 "is_configured": true, 00:15:42.231 "data_offset": 2048, 00:15:42.231 "data_size": 63488 00:15:42.231 }, 00:15:42.231 { 00:15:42.231 "name": "BaseBdev2", 00:15:42.231 "uuid": "3b40149e-9545-445f-acbb-3d32430efb8d", 00:15:42.231 "is_configured": true, 00:15:42.231 "data_offset": 2048, 00:15:42.231 "data_size": 63488 00:15:42.231 }, 00:15:42.231 { 00:15:42.231 "name": "BaseBdev3", 00:15:42.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.231 "is_configured": false, 00:15:42.231 "data_offset": 0, 00:15:42.231 "data_size": 0 00:15:42.231 } 00:15:42.231 ] 00:15:42.231 }' 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.231 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.492 [2024-11-20 17:50:09.616251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.492 [2024-11-20 17:50:09.616635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:42.492 [2024-11-20 17:50:09.616709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:42.492 [2024-11-20 17:50:09.617046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:42.492 BaseBdev3 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.492 [2024-11-20 17:50:09.622523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:42.492 [2024-11-20 17:50:09.622582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:42.492 [2024-11-20 17:50:09.622784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.492 [ 00:15:42.492 { 00:15:42.492 "name": "BaseBdev3", 00:15:42.492 "aliases": [ 00:15:42.492 "a4d9b4e3-a377-4b10-9738-8803b1d725c3" 00:15:42.492 ], 00:15:42.492 "product_name": "Malloc disk", 00:15:42.492 "block_size": 512, 00:15:42.492 
"num_blocks": 65536, 00:15:42.492 "uuid": "a4d9b4e3-a377-4b10-9738-8803b1d725c3", 00:15:42.492 "assigned_rate_limits": { 00:15:42.492 "rw_ios_per_sec": 0, 00:15:42.492 "rw_mbytes_per_sec": 0, 00:15:42.492 "r_mbytes_per_sec": 0, 00:15:42.492 "w_mbytes_per_sec": 0 00:15:42.492 }, 00:15:42.492 "claimed": true, 00:15:42.492 "claim_type": "exclusive_write", 00:15:42.492 "zoned": false, 00:15:42.492 "supported_io_types": { 00:15:42.492 "read": true, 00:15:42.492 "write": true, 00:15:42.492 "unmap": true, 00:15:42.492 "flush": true, 00:15:42.492 "reset": true, 00:15:42.492 "nvme_admin": false, 00:15:42.492 "nvme_io": false, 00:15:42.492 "nvme_io_md": false, 00:15:42.492 "write_zeroes": true, 00:15:42.492 "zcopy": true, 00:15:42.492 "get_zone_info": false, 00:15:42.492 "zone_management": false, 00:15:42.492 "zone_append": false, 00:15:42.492 "compare": false, 00:15:42.492 "compare_and_write": false, 00:15:42.492 "abort": true, 00:15:42.492 "seek_hole": false, 00:15:42.492 "seek_data": false, 00:15:42.492 "copy": true, 00:15:42.492 "nvme_iov_md": false 00:15:42.492 }, 00:15:42.492 "memory_domains": [ 00:15:42.492 { 00:15:42.492 "dma_device_id": "system", 00:15:42.492 "dma_device_type": 1 00:15:42.492 }, 00:15:42.492 { 00:15:42.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.492 "dma_device_type": 2 00:15:42.492 } 00:15:42.492 ], 00:15:42.492 "driver_specific": {} 00:15:42.492 } 00:15:42.492 ] 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.492 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.752 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.752 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.752 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.752 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.752 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.752 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.752 "name": "Existed_Raid", 00:15:42.752 "uuid": "3134fb6a-6bce-4d87-9f6a-484d97952adb", 00:15:42.752 "strip_size_kb": 64, 00:15:42.752 "state": "online", 00:15:42.752 "raid_level": "raid5f", 00:15:42.752 "superblock": true, 
00:15:42.752 "num_base_bdevs": 3, 00:15:42.752 "num_base_bdevs_discovered": 3, 00:15:42.752 "num_base_bdevs_operational": 3, 00:15:42.752 "base_bdevs_list": [ 00:15:42.752 { 00:15:42.752 "name": "BaseBdev1", 00:15:42.752 "uuid": "057685c9-21ba-4f3a-b3ed-5bbab55b3353", 00:15:42.752 "is_configured": true, 00:15:42.752 "data_offset": 2048, 00:15:42.752 "data_size": 63488 00:15:42.752 }, 00:15:42.752 { 00:15:42.752 "name": "BaseBdev2", 00:15:42.752 "uuid": "3b40149e-9545-445f-acbb-3d32430efb8d", 00:15:42.752 "is_configured": true, 00:15:42.752 "data_offset": 2048, 00:15:42.752 "data_size": 63488 00:15:42.752 }, 00:15:42.752 { 00:15:42.752 "name": "BaseBdev3", 00:15:42.752 "uuid": "a4d9b4e3-a377-4b10-9738-8803b1d725c3", 00:15:42.752 "is_configured": true, 00:15:42.752 "data_offset": 2048, 00:15:42.752 "data_size": 63488 00:15:42.752 } 00:15:42.752 ] 00:15:42.752 }' 00:15:42.752 17:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.752 17:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.012 [2024-11-20 17:50:10.104755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:43.012 "name": "Existed_Raid", 00:15:43.012 "aliases": [ 00:15:43.012 "3134fb6a-6bce-4d87-9f6a-484d97952adb" 00:15:43.012 ], 00:15:43.012 "product_name": "Raid Volume", 00:15:43.012 "block_size": 512, 00:15:43.012 "num_blocks": 126976, 00:15:43.012 "uuid": "3134fb6a-6bce-4d87-9f6a-484d97952adb", 00:15:43.012 "assigned_rate_limits": { 00:15:43.012 "rw_ios_per_sec": 0, 00:15:43.012 "rw_mbytes_per_sec": 0, 00:15:43.012 "r_mbytes_per_sec": 0, 00:15:43.012 "w_mbytes_per_sec": 0 00:15:43.012 }, 00:15:43.012 "claimed": false, 00:15:43.012 "zoned": false, 00:15:43.012 "supported_io_types": { 00:15:43.012 "read": true, 00:15:43.012 "write": true, 00:15:43.012 "unmap": false, 00:15:43.012 "flush": false, 00:15:43.012 "reset": true, 00:15:43.012 "nvme_admin": false, 00:15:43.012 "nvme_io": false, 00:15:43.012 "nvme_io_md": false, 00:15:43.012 "write_zeroes": true, 00:15:43.012 "zcopy": false, 00:15:43.012 "get_zone_info": false, 00:15:43.012 "zone_management": false, 00:15:43.012 "zone_append": false, 00:15:43.012 "compare": false, 00:15:43.012 "compare_and_write": false, 00:15:43.012 "abort": false, 00:15:43.012 "seek_hole": false, 00:15:43.012 "seek_data": false, 00:15:43.012 "copy": false, 00:15:43.012 "nvme_iov_md": false 00:15:43.012 }, 00:15:43.012 "driver_specific": { 00:15:43.012 "raid": { 00:15:43.012 "uuid": "3134fb6a-6bce-4d87-9f6a-484d97952adb", 00:15:43.012 
"strip_size_kb": 64, 00:15:43.012 "state": "online", 00:15:43.012 "raid_level": "raid5f", 00:15:43.012 "superblock": true, 00:15:43.012 "num_base_bdevs": 3, 00:15:43.012 "num_base_bdevs_discovered": 3, 00:15:43.012 "num_base_bdevs_operational": 3, 00:15:43.012 "base_bdevs_list": [ 00:15:43.012 { 00:15:43.012 "name": "BaseBdev1", 00:15:43.012 "uuid": "057685c9-21ba-4f3a-b3ed-5bbab55b3353", 00:15:43.012 "is_configured": true, 00:15:43.012 "data_offset": 2048, 00:15:43.012 "data_size": 63488 00:15:43.012 }, 00:15:43.012 { 00:15:43.012 "name": "BaseBdev2", 00:15:43.012 "uuid": "3b40149e-9545-445f-acbb-3d32430efb8d", 00:15:43.012 "is_configured": true, 00:15:43.012 "data_offset": 2048, 00:15:43.012 "data_size": 63488 00:15:43.012 }, 00:15:43.012 { 00:15:43.012 "name": "BaseBdev3", 00:15:43.012 "uuid": "a4d9b4e3-a377-4b10-9738-8803b1d725c3", 00:15:43.012 "is_configured": true, 00:15:43.012 "data_offset": 2048, 00:15:43.012 "data_size": 63488 00:15:43.012 } 00:15:43.012 ] 00:15:43.012 } 00:15:43.012 } 00:15:43.012 }' 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:43.012 BaseBdev2 00:15:43.012 BaseBdev3' 00:15:43.012 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.272 17:50:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.272 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.272 [2024-11-20 17:50:10.352252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.533 
17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.533 "name": "Existed_Raid", 00:15:43.533 "uuid": "3134fb6a-6bce-4d87-9f6a-484d97952adb", 00:15:43.533 "strip_size_kb": 64, 00:15:43.533 "state": "online", 00:15:43.533 "raid_level": "raid5f", 00:15:43.533 "superblock": true, 00:15:43.533 "num_base_bdevs": 3, 00:15:43.533 "num_base_bdevs_discovered": 2, 00:15:43.533 "num_base_bdevs_operational": 2, 00:15:43.533 
"base_bdevs_list": [ 00:15:43.533 { 00:15:43.533 "name": null, 00:15:43.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.533 "is_configured": false, 00:15:43.533 "data_offset": 0, 00:15:43.533 "data_size": 63488 00:15:43.533 }, 00:15:43.533 { 00:15:43.533 "name": "BaseBdev2", 00:15:43.533 "uuid": "3b40149e-9545-445f-acbb-3d32430efb8d", 00:15:43.533 "is_configured": true, 00:15:43.533 "data_offset": 2048, 00:15:43.533 "data_size": 63488 00:15:43.533 }, 00:15:43.533 { 00:15:43.533 "name": "BaseBdev3", 00:15:43.533 "uuid": "a4d9b4e3-a377-4b10-9738-8803b1d725c3", 00:15:43.533 "is_configured": true, 00:15:43.533 "data_offset": 2048, 00:15:43.533 "data_size": 63488 00:15:43.533 } 00:15:43.533 ] 00:15:43.533 }' 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.533 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.793 17:50:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.793 17:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.793 [2024-11-20 17:50:10.923696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.793 [2024-11-20 17:50:10.923924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.052 [2024-11-20 17:50:11.023672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:44.052 17:50:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.052 [2024-11-20 17:50:11.079564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:44.052 [2024-11-20 17:50:11.079612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:44.052 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.311 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:44.311 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:44.311 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:44.311 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.312 BaseBdev2 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.312 [ 00:15:44.312 { 00:15:44.312 "name": "BaseBdev2", 
00:15:44.312 "aliases": [ 00:15:44.312 "15f1f79d-f70e-4957-9f4e-def8ab53bf04" 00:15:44.312 ], 00:15:44.312 "product_name": "Malloc disk", 00:15:44.312 "block_size": 512, 00:15:44.312 "num_blocks": 65536, 00:15:44.312 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:44.312 "assigned_rate_limits": { 00:15:44.312 "rw_ios_per_sec": 0, 00:15:44.312 "rw_mbytes_per_sec": 0, 00:15:44.312 "r_mbytes_per_sec": 0, 00:15:44.312 "w_mbytes_per_sec": 0 00:15:44.312 }, 00:15:44.312 "claimed": false, 00:15:44.312 "zoned": false, 00:15:44.312 "supported_io_types": { 00:15:44.312 "read": true, 00:15:44.312 "write": true, 00:15:44.312 "unmap": true, 00:15:44.312 "flush": true, 00:15:44.312 "reset": true, 00:15:44.312 "nvme_admin": false, 00:15:44.312 "nvme_io": false, 00:15:44.312 "nvme_io_md": false, 00:15:44.312 "write_zeroes": true, 00:15:44.312 "zcopy": true, 00:15:44.312 "get_zone_info": false, 00:15:44.312 "zone_management": false, 00:15:44.312 "zone_append": false, 00:15:44.312 "compare": false, 00:15:44.312 "compare_and_write": false, 00:15:44.312 "abort": true, 00:15:44.312 "seek_hole": false, 00:15:44.312 "seek_data": false, 00:15:44.312 "copy": true, 00:15:44.312 "nvme_iov_md": false 00:15:44.312 }, 00:15:44.312 "memory_domains": [ 00:15:44.312 { 00:15:44.312 "dma_device_id": "system", 00:15:44.312 "dma_device_type": 1 00:15:44.312 }, 00:15:44.312 { 00:15:44.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.312 "dma_device_type": 2 00:15:44.312 } 00:15:44.312 ], 00:15:44.312 "driver_specific": {} 00:15:44.312 } 00:15:44.312 ] 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
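[Editor's aside, not part of the captured trace: the `jq -r '… | select(.is_configured == true).name'` filters that recur throughout this test can be hard to follow inside the interleaved xtrace output. Below is a minimal standalone Python sketch, using a trimmed copy of the `base_bdevs_list` JSON shown in this trace, that reproduces the same selection; it is an illustration only and is not part of the SPDK test suite.]

```python
import json

# Trimmed copy of the raid bdev JSON dumped in the trace above
# (only the fields the jq filter touches are kept).
raid_info = json.loads("""
{"driver_specific": {"raid": {"base_bdevs_list": [
  {"name": "BaseBdev1", "is_configured": true},
  {"name": null,        "is_configured": false},
  {"name": "BaseBdev3", "is_configured": true}
]}}}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
names = [b["name"]
         for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print(names)  # ['BaseBdev1', 'BaseBdev3']
```

The test script joins the resulting names into `$base_bdev_names` and then loops over them with `rpc_cmd bdev_get_bdevs -b <name>` to compare each base bdev's block/metadata geometry, which is exactly the loop visible in the surrounding trace.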
00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.312 BaseBdev3 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.312 17:50:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.312 [ 00:15:44.312 { 00:15:44.312 "name": "BaseBdev3", 00:15:44.312 "aliases": [ 00:15:44.312 "5ddcd808-3365-42b0-9dc5-b9c3603db79e" 00:15:44.312 ], 00:15:44.312 "product_name": "Malloc disk", 00:15:44.312 "block_size": 512, 00:15:44.312 "num_blocks": 65536, 00:15:44.312 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:44.312 "assigned_rate_limits": { 00:15:44.312 "rw_ios_per_sec": 0, 00:15:44.312 "rw_mbytes_per_sec": 0, 00:15:44.312 "r_mbytes_per_sec": 0, 00:15:44.312 "w_mbytes_per_sec": 0 00:15:44.312 }, 00:15:44.312 "claimed": false, 00:15:44.312 "zoned": false, 00:15:44.312 "supported_io_types": { 00:15:44.312 "read": true, 00:15:44.312 "write": true, 00:15:44.312 "unmap": true, 00:15:44.312 "flush": true, 00:15:44.312 "reset": true, 00:15:44.312 "nvme_admin": false, 00:15:44.312 "nvme_io": false, 00:15:44.312 "nvme_io_md": false, 00:15:44.312 "write_zeroes": true, 00:15:44.312 "zcopy": true, 00:15:44.312 "get_zone_info": false, 00:15:44.312 "zone_management": false, 00:15:44.312 "zone_append": false, 00:15:44.312 "compare": false, 00:15:44.312 "compare_and_write": false, 00:15:44.312 "abort": true, 00:15:44.312 "seek_hole": false, 00:15:44.312 "seek_data": false, 00:15:44.312 "copy": true, 00:15:44.312 "nvme_iov_md": false 00:15:44.312 }, 00:15:44.312 "memory_domains": [ 00:15:44.312 { 00:15:44.313 "dma_device_id": "system", 00:15:44.313 "dma_device_type": 1 00:15:44.313 }, 00:15:44.313 { 00:15:44.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.313 "dma_device_type": 2 00:15:44.313 } 00:15:44.313 ], 00:15:44.313 "driver_specific": {} 00:15:44.313 } 00:15:44.313 ] 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:44.313 
17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.313 [2024-11-20 17:50:11.402596] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.313 [2024-11-20 17:50:11.402699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.313 [2024-11-20 17:50:11.402742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.313 [2024-11-20 17:50:11.404687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.313 "name": "Existed_Raid", 00:15:44.313 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:44.313 "strip_size_kb": 64, 00:15:44.313 "state": "configuring", 00:15:44.313 "raid_level": "raid5f", 00:15:44.313 "superblock": true, 00:15:44.313 "num_base_bdevs": 3, 00:15:44.313 "num_base_bdevs_discovered": 2, 00:15:44.313 "num_base_bdevs_operational": 3, 00:15:44.313 "base_bdevs_list": [ 00:15:44.313 { 00:15:44.313 "name": "BaseBdev1", 00:15:44.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.313 "is_configured": false, 00:15:44.313 "data_offset": 0, 00:15:44.313 "data_size": 0 00:15:44.313 }, 00:15:44.313 { 00:15:44.313 "name": "BaseBdev2", 00:15:44.313 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:44.313 "is_configured": true, 00:15:44.313 "data_offset": 2048, 00:15:44.313 "data_size": 63488 00:15:44.313 }, 00:15:44.313 { 00:15:44.313 "name": "BaseBdev3", 00:15:44.313 "uuid": 
"5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:44.313 "is_configured": true, 00:15:44.313 "data_offset": 2048, 00:15:44.313 "data_size": 63488 00:15:44.313 } 00:15:44.313 ] 00:15:44.313 }' 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.313 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.882 [2024-11-20 17:50:11.813892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.882 17:50:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.882 "name": "Existed_Raid", 00:15:44.882 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:44.882 "strip_size_kb": 64, 00:15:44.882 "state": "configuring", 00:15:44.882 "raid_level": "raid5f", 00:15:44.882 "superblock": true, 00:15:44.882 "num_base_bdevs": 3, 00:15:44.882 "num_base_bdevs_discovered": 1, 00:15:44.882 "num_base_bdevs_operational": 3, 00:15:44.882 "base_bdevs_list": [ 00:15:44.882 { 00:15:44.882 "name": "BaseBdev1", 00:15:44.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.882 "is_configured": false, 00:15:44.882 "data_offset": 0, 00:15:44.882 "data_size": 0 00:15:44.882 }, 00:15:44.882 { 00:15:44.882 "name": null, 00:15:44.882 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:44.882 "is_configured": false, 00:15:44.882 "data_offset": 0, 00:15:44.882 "data_size": 63488 00:15:44.882 }, 00:15:44.882 { 00:15:44.882 "name": "BaseBdev3", 00:15:44.882 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:44.882 "is_configured": true, 00:15:44.882 "data_offset": 2048, 00:15:44.882 "data_size": 63488 00:15:44.882 } 00:15:44.882 ] 
00:15:44.882 }' 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.882 17:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.152 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.424 [2024-11-20 17:50:12.337949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.425 BaseBdev1 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.425 [ 00:15:45.425 { 00:15:45.425 "name": "BaseBdev1", 00:15:45.425 "aliases": [ 00:15:45.425 "3e9729a8-0a93-47c7-977d-bed2189cd293" 00:15:45.425 ], 00:15:45.425 "product_name": "Malloc disk", 00:15:45.425 "block_size": 512, 00:15:45.425 "num_blocks": 65536, 00:15:45.425 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:45.425 "assigned_rate_limits": { 00:15:45.425 "rw_ios_per_sec": 0, 00:15:45.425 "rw_mbytes_per_sec": 0, 00:15:45.425 "r_mbytes_per_sec": 0, 00:15:45.425 "w_mbytes_per_sec": 0 00:15:45.425 }, 00:15:45.425 "claimed": true, 00:15:45.425 "claim_type": "exclusive_write", 00:15:45.425 "zoned": false, 00:15:45.425 "supported_io_types": { 00:15:45.425 "read": true, 00:15:45.425 "write": true, 00:15:45.425 "unmap": true, 00:15:45.425 "flush": true, 00:15:45.425 "reset": true, 00:15:45.425 "nvme_admin": false, 00:15:45.425 "nvme_io": false, 00:15:45.425 
"nvme_io_md": false, 00:15:45.425 "write_zeroes": true, 00:15:45.425 "zcopy": true, 00:15:45.425 "get_zone_info": false, 00:15:45.425 "zone_management": false, 00:15:45.425 "zone_append": false, 00:15:45.425 "compare": false, 00:15:45.425 "compare_and_write": false, 00:15:45.425 "abort": true, 00:15:45.425 "seek_hole": false, 00:15:45.425 "seek_data": false, 00:15:45.425 "copy": true, 00:15:45.425 "nvme_iov_md": false 00:15:45.425 }, 00:15:45.425 "memory_domains": [ 00:15:45.425 { 00:15:45.425 "dma_device_id": "system", 00:15:45.425 "dma_device_type": 1 00:15:45.425 }, 00:15:45.425 { 00:15:45.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.425 "dma_device_type": 2 00:15:45.425 } 00:15:45.425 ], 00:15:45.425 "driver_specific": {} 00:15:45.425 } 00:15:45.425 ] 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.425 
17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.425 "name": "Existed_Raid", 00:15:45.425 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:45.425 "strip_size_kb": 64, 00:15:45.425 "state": "configuring", 00:15:45.425 "raid_level": "raid5f", 00:15:45.425 "superblock": true, 00:15:45.425 "num_base_bdevs": 3, 00:15:45.425 "num_base_bdevs_discovered": 2, 00:15:45.425 "num_base_bdevs_operational": 3, 00:15:45.425 "base_bdevs_list": [ 00:15:45.425 { 00:15:45.425 "name": "BaseBdev1", 00:15:45.425 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:45.425 "is_configured": true, 00:15:45.425 "data_offset": 2048, 00:15:45.425 "data_size": 63488 00:15:45.425 }, 00:15:45.425 { 00:15:45.425 "name": null, 00:15:45.425 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:45.425 "is_configured": false, 00:15:45.425 "data_offset": 0, 00:15:45.425 "data_size": 63488 00:15:45.425 }, 00:15:45.425 { 00:15:45.425 "name": "BaseBdev3", 00:15:45.425 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:45.425 "is_configured": true, 00:15:45.425 "data_offset": 2048, 00:15:45.425 "data_size": 63488 00:15:45.425 } 
00:15:45.425 ] 00:15:45.425 }' 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.425 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.686 [2024-11-20 17:50:12.793169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.686 "name": "Existed_Raid", 00:15:45.686 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:45.686 "strip_size_kb": 64, 00:15:45.686 "state": "configuring", 00:15:45.686 "raid_level": "raid5f", 00:15:45.686 "superblock": true, 00:15:45.686 "num_base_bdevs": 3, 00:15:45.686 "num_base_bdevs_discovered": 1, 00:15:45.686 "num_base_bdevs_operational": 3, 00:15:45.686 "base_bdevs_list": [ 00:15:45.686 { 00:15:45.686 "name": "BaseBdev1", 00:15:45.686 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:45.686 "is_configured": true, 
00:15:45.686 "data_offset": 2048, 00:15:45.686 "data_size": 63488 00:15:45.686 }, 00:15:45.686 { 00:15:45.686 "name": null, 00:15:45.686 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:45.686 "is_configured": false, 00:15:45.686 "data_offset": 0, 00:15:45.686 "data_size": 63488 00:15:45.686 }, 00:15:45.686 { 00:15:45.686 "name": null, 00:15:45.686 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:45.686 "is_configured": false, 00:15:45.686 "data_offset": 0, 00:15:45.686 "data_size": 63488 00:15:45.686 } 00:15:45.686 ] 00:15:45.686 }' 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.686 17:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.256 [2024-11-20 17:50:13.288680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.256 17:50:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.256 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.257 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.257 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.257 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.257 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.257 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:46.257 "name": "Existed_Raid", 00:15:46.257 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:46.257 "strip_size_kb": 64, 00:15:46.257 "state": "configuring", 00:15:46.257 "raid_level": "raid5f", 00:15:46.257 "superblock": true, 00:15:46.257 "num_base_bdevs": 3, 00:15:46.257 "num_base_bdevs_discovered": 2, 00:15:46.257 "num_base_bdevs_operational": 3, 00:15:46.257 "base_bdevs_list": [ 00:15:46.257 { 00:15:46.257 "name": "BaseBdev1", 00:15:46.257 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:46.257 "is_configured": true, 00:15:46.257 "data_offset": 2048, 00:15:46.257 "data_size": 63488 00:15:46.257 }, 00:15:46.257 { 00:15:46.257 "name": null, 00:15:46.257 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:46.257 "is_configured": false, 00:15:46.257 "data_offset": 0, 00:15:46.257 "data_size": 63488 00:15:46.257 }, 00:15:46.257 { 00:15:46.257 "name": "BaseBdev3", 00:15:46.257 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:46.257 "is_configured": true, 00:15:46.257 "data_offset": 2048, 00:15:46.257 "data_size": 63488 00:15:46.257 } 00:15:46.257 ] 00:15:46.257 }' 00:15:46.257 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.257 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.825 [2024-11-20 17:50:13.823817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.825 17:50:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.825 "name": "Existed_Raid", 00:15:46.825 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:46.825 "strip_size_kb": 64, 00:15:46.825 "state": "configuring", 00:15:46.825 "raid_level": "raid5f", 00:15:46.825 "superblock": true, 00:15:46.825 "num_base_bdevs": 3, 00:15:46.825 "num_base_bdevs_discovered": 1, 00:15:46.825 "num_base_bdevs_operational": 3, 00:15:46.825 "base_bdevs_list": [ 00:15:46.825 { 00:15:46.825 "name": null, 00:15:46.825 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:46.825 "is_configured": false, 00:15:46.825 "data_offset": 0, 00:15:46.825 "data_size": 63488 00:15:46.825 }, 00:15:46.825 { 00:15:46.825 "name": null, 00:15:46.825 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:46.825 "is_configured": false, 00:15:46.825 "data_offset": 0, 00:15:46.825 "data_size": 63488 00:15:46.825 }, 00:15:46.825 { 00:15:46.825 "name": "BaseBdev3", 00:15:46.825 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:46.825 "is_configured": true, 00:15:46.825 "data_offset": 2048, 00:15:46.825 "data_size": 63488 00:15:46.825 } 00:15:46.825 ] 00:15:46.825 }' 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.825 17:50:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.392 [2024-11-20 17:50:14.422848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.392 17:50:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.392 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.392 "name": "Existed_Raid", 00:15:47.392 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:47.392 "strip_size_kb": 64, 00:15:47.392 "state": "configuring", 00:15:47.392 "raid_level": "raid5f", 00:15:47.392 "superblock": true, 00:15:47.392 "num_base_bdevs": 3, 00:15:47.392 "num_base_bdevs_discovered": 2, 00:15:47.392 "num_base_bdevs_operational": 3, 00:15:47.392 "base_bdevs_list": [ 00:15:47.392 { 00:15:47.392 "name": null, 00:15:47.392 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:47.392 "is_configured": false, 00:15:47.392 "data_offset": 0, 00:15:47.392 "data_size": 63488 00:15:47.392 }, 00:15:47.392 { 00:15:47.392 "name": "BaseBdev2", 00:15:47.392 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:47.392 "is_configured": true, 00:15:47.392 "data_offset": 2048, 00:15:47.392 "data_size": 63488 00:15:47.392 }, 00:15:47.392 { 
00:15:47.392 "name": "BaseBdev3", 00:15:47.392 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:47.392 "is_configured": true, 00:15:47.393 "data_offset": 2048, 00:15:47.393 "data_size": 63488 00:15:47.393 } 00:15:47.393 ] 00:15:47.393 }' 00:15:47.393 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.393 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.652 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:47.652 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.652 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.652 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3e9729a8-0a93-47c7-977d-bed2189cd293 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 [2024-11-20 17:50:14.943075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:47.912 NewBaseBdev 00:15:47.912 [2024-11-20 17:50:14.943425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:47.912 [2024-11-20 17:50:14.943447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:47.912 [2024-11-20 17:50:14.943718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 [2024-11-20 17:50:14.949038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:47.912 
[2024-11-20 17:50:14.949098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:47.912 [2024-11-20 17:50:14.949305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 [ 00:15:47.912 { 00:15:47.912 "name": "NewBaseBdev", 00:15:47.912 "aliases": [ 00:15:47.912 "3e9729a8-0a93-47c7-977d-bed2189cd293" 00:15:47.912 ], 00:15:47.913 "product_name": "Malloc disk", 00:15:47.913 "block_size": 512, 00:15:47.913 "num_blocks": 65536, 00:15:47.913 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:47.913 "assigned_rate_limits": { 00:15:47.913 "rw_ios_per_sec": 0, 00:15:47.913 "rw_mbytes_per_sec": 0, 00:15:47.913 "r_mbytes_per_sec": 0, 00:15:47.913 "w_mbytes_per_sec": 0 00:15:47.913 }, 00:15:47.913 "claimed": true, 00:15:47.913 "claim_type": "exclusive_write", 00:15:47.913 "zoned": false, 00:15:47.913 "supported_io_types": { 00:15:47.913 "read": true, 00:15:47.913 "write": true, 00:15:47.913 "unmap": true, 00:15:47.913 "flush": true, 00:15:47.913 "reset": true, 00:15:47.913 "nvme_admin": false, 00:15:47.913 "nvme_io": false, 00:15:47.913 "nvme_io_md": false, 00:15:47.913 "write_zeroes": true, 00:15:47.913 "zcopy": true, 00:15:47.913 "get_zone_info": false, 00:15:47.913 "zone_management": false, 00:15:47.913 "zone_append": false, 00:15:47.913 "compare": false, 00:15:47.913 "compare_and_write": false, 00:15:47.913 "abort": true, 00:15:47.913 "seek_hole": false, 00:15:47.913 "seek_data": false, 
00:15:47.913 "copy": true, 00:15:47.913 "nvme_iov_md": false 00:15:47.913 }, 00:15:47.913 "memory_domains": [ 00:15:47.913 { 00:15:47.913 "dma_device_id": "system", 00:15:47.913 "dma_device_type": 1 00:15:47.913 }, 00:15:47.913 { 00:15:47.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.913 "dma_device_type": 2 00:15:47.913 } 00:15:47.913 ], 00:15:47.913 "driver_specific": {} 00:15:47.913 } 00:15:47.913 ] 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.913 17:50:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.913 17:50:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.913 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.913 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.913 "name": "Existed_Raid", 00:15:47.913 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:47.913 "strip_size_kb": 64, 00:15:47.913 "state": "online", 00:15:47.913 "raid_level": "raid5f", 00:15:47.913 "superblock": true, 00:15:47.913 "num_base_bdevs": 3, 00:15:47.913 "num_base_bdevs_discovered": 3, 00:15:47.913 "num_base_bdevs_operational": 3, 00:15:47.913 "base_bdevs_list": [ 00:15:47.913 { 00:15:47.913 "name": "NewBaseBdev", 00:15:47.913 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:47.913 "is_configured": true, 00:15:47.913 "data_offset": 2048, 00:15:47.913 "data_size": 63488 00:15:47.913 }, 00:15:47.913 { 00:15:47.913 "name": "BaseBdev2", 00:15:47.913 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:47.913 "is_configured": true, 00:15:47.913 "data_offset": 2048, 00:15:47.913 "data_size": 63488 00:15:47.913 }, 00:15:47.913 { 00:15:47.913 "name": "BaseBdev3", 00:15:47.913 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:47.913 "is_configured": true, 00:15:47.913 "data_offset": 2048, 00:15:47.913 "data_size": 63488 00:15:47.913 } 00:15:47.913 ] 00:15:47.913 }' 00:15:47.913 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.913 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.483 [2024-11-20 17:50:15.427539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.483 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.483 "name": "Existed_Raid", 00:15:48.483 "aliases": [ 00:15:48.483 "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac" 00:15:48.483 ], 00:15:48.483 "product_name": "Raid Volume", 00:15:48.483 "block_size": 512, 00:15:48.483 "num_blocks": 126976, 00:15:48.483 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:48.483 "assigned_rate_limits": { 00:15:48.483 "rw_ios_per_sec": 0, 00:15:48.483 "rw_mbytes_per_sec": 0, 00:15:48.483 "r_mbytes_per_sec": 0, 00:15:48.483 "w_mbytes_per_sec": 0 00:15:48.483 }, 00:15:48.483 "claimed": false, 00:15:48.483 "zoned": false, 00:15:48.483 
"supported_io_types": { 00:15:48.483 "read": true, 00:15:48.483 "write": true, 00:15:48.483 "unmap": false, 00:15:48.483 "flush": false, 00:15:48.483 "reset": true, 00:15:48.483 "nvme_admin": false, 00:15:48.483 "nvme_io": false, 00:15:48.483 "nvme_io_md": false, 00:15:48.483 "write_zeroes": true, 00:15:48.483 "zcopy": false, 00:15:48.483 "get_zone_info": false, 00:15:48.483 "zone_management": false, 00:15:48.483 "zone_append": false, 00:15:48.483 "compare": false, 00:15:48.483 "compare_and_write": false, 00:15:48.483 "abort": false, 00:15:48.483 "seek_hole": false, 00:15:48.483 "seek_data": false, 00:15:48.483 "copy": false, 00:15:48.483 "nvme_iov_md": false 00:15:48.483 }, 00:15:48.483 "driver_specific": { 00:15:48.483 "raid": { 00:15:48.483 "uuid": "ca0f9a60-f888-4a8c-91ef-032b4e8c1aac", 00:15:48.483 "strip_size_kb": 64, 00:15:48.483 "state": "online", 00:15:48.483 "raid_level": "raid5f", 00:15:48.483 "superblock": true, 00:15:48.484 "num_base_bdevs": 3, 00:15:48.484 "num_base_bdevs_discovered": 3, 00:15:48.484 "num_base_bdevs_operational": 3, 00:15:48.484 "base_bdevs_list": [ 00:15:48.484 { 00:15:48.484 "name": "NewBaseBdev", 00:15:48.484 "uuid": "3e9729a8-0a93-47c7-977d-bed2189cd293", 00:15:48.484 "is_configured": true, 00:15:48.484 "data_offset": 2048, 00:15:48.484 "data_size": 63488 00:15:48.484 }, 00:15:48.484 { 00:15:48.484 "name": "BaseBdev2", 00:15:48.484 "uuid": "15f1f79d-f70e-4957-9f4e-def8ab53bf04", 00:15:48.484 "is_configured": true, 00:15:48.484 "data_offset": 2048, 00:15:48.484 "data_size": 63488 00:15:48.484 }, 00:15:48.484 { 00:15:48.484 "name": "BaseBdev3", 00:15:48.484 "uuid": "5ddcd808-3365-42b0-9dc5-b9c3603db79e", 00:15:48.484 "is_configured": true, 00:15:48.484 "data_offset": 2048, 00:15:48.484 "data_size": 63488 00:15:48.484 } 00:15:48.484 ] 00:15:48.484 } 00:15:48.484 } 00:15:48.484 }' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:48.484 BaseBdev2 00:15:48.484 BaseBdev3' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.484 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.484 [2024-11-20 17:50:15.655052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.484 [2024-11-20 17:50:15.655112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:48.484 [2024-11-20 17:50:15.655187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.484 [2024-11-20 17:50:15.655473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.484 [2024-11-20 17:50:15.655486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80989 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80989 ']' 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80989 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80989 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.744 killing process with pid 80989 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80989' 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80989 00:15:48.744 [2024-11-20 17:50:15.695627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.744 17:50:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80989 00:15:49.003 [2024-11-20 17:50:16.009692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.383 17:50:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:50.383 00:15:50.383 real 0m10.471s 00:15:50.383 user 0m16.354s 00:15:50.383 sys 0m1.980s 00:15:50.383 ************************************ 00:15:50.383 END TEST raid5f_state_function_test_sb 00:15:50.383 17:50:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.383 17:50:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.383 ************************************ 00:15:50.383 17:50:17 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:50.383 17:50:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:50.383 17:50:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.383 17:50:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.383 ************************************ 00:15:50.383 START TEST raid5f_superblock_test 00:15:50.383 ************************************ 00:15:50.383 17:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:50.383 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:50.383 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81609 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81609 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81609 ']' 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:50.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.384 17:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.384 [2024-11-20 17:50:17.344591] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:15:50.384 [2024-11-20 17:50:17.344784] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81609 ] 00:15:50.384 [2024-11-20 17:50:17.517088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.643 [2024-11-20 17:50:17.646377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.903 [2024-11-20 17:50:17.875612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.903 [2024-11-20 17:50:17.875675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.163 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.163 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:51.163 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:51.163 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:51.163 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:51.163 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:51.164 17:50:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.164 malloc1 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.164 [2024-11-20 17:50:18.220664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.164 [2024-11-20 17:50:18.220811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.164 [2024-11-20 17:50:18.220851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:51.164 [2024-11-20 17:50:18.220879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.164 [2024-11-20 17:50:18.223259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.164 [2024-11-20 17:50:18.223331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.164 pt1 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.164 malloc2 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.164 [2024-11-20 17:50:18.283897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.164 [2024-11-20 17:50:18.284005] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.164 [2024-11-20 17:50:18.284051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:51.164 [2024-11-20 17:50:18.284061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.164 [2024-11-20 17:50:18.286381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.164 [2024-11-20 17:50:18.286455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.164 pt2 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.164 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.424 malloc3 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.424 [2024-11-20 17:50:18.365262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.424 [2024-11-20 17:50:18.365359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.424 [2024-11-20 17:50:18.365398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:51.424 [2024-11-20 17:50:18.365426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.424 [2024-11-20 17:50:18.367735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.424 [2024-11-20 17:50:18.367800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.424 pt3 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.424 [2024-11-20 17:50:18.377306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.424 [2024-11-20 
17:50:18.379394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.424 [2024-11-20 17:50:18.379499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.424 [2024-11-20 17:50:18.379691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:51.424 [2024-11-20 17:50:18.379746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.424 [2024-11-20 17:50:18.379991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:51.424 [2024-11-20 17:50:18.385396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:51.424 [2024-11-20 17:50:18.385446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:51.424 [2024-11-20 17:50:18.385667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.424 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.424 "name": "raid_bdev1", 00:15:51.424 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:51.424 "strip_size_kb": 64, 00:15:51.424 "state": "online", 00:15:51.424 "raid_level": "raid5f", 00:15:51.424 "superblock": true, 00:15:51.425 "num_base_bdevs": 3, 00:15:51.425 "num_base_bdevs_discovered": 3, 00:15:51.425 "num_base_bdevs_operational": 3, 00:15:51.425 "base_bdevs_list": [ 00:15:51.425 { 00:15:51.425 "name": "pt1", 00:15:51.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.425 "is_configured": true, 00:15:51.425 "data_offset": 2048, 00:15:51.425 "data_size": 63488 00:15:51.425 }, 00:15:51.425 { 00:15:51.425 "name": "pt2", 00:15:51.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.425 "is_configured": true, 00:15:51.425 "data_offset": 2048, 00:15:51.425 "data_size": 63488 00:15:51.425 }, 00:15:51.425 { 00:15:51.425 "name": "pt3", 00:15:51.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.425 "is_configured": true, 00:15:51.425 "data_offset": 2048, 00:15:51.425 "data_size": 63488 00:15:51.425 } 00:15:51.425 ] 00:15:51.425 }' 00:15:51.425 17:50:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.425 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.685 [2024-11-20 17:50:18.820211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.685 "name": "raid_bdev1", 00:15:51.685 "aliases": [ 00:15:51.685 "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af" 00:15:51.685 ], 00:15:51.685 "product_name": "Raid Volume", 00:15:51.685 "block_size": 512, 00:15:51.685 "num_blocks": 126976, 00:15:51.685 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:51.685 "assigned_rate_limits": { 00:15:51.685 "rw_ios_per_sec": 0, 00:15:51.685 
"rw_mbytes_per_sec": 0, 00:15:51.685 "r_mbytes_per_sec": 0, 00:15:51.685 "w_mbytes_per_sec": 0 00:15:51.685 }, 00:15:51.685 "claimed": false, 00:15:51.685 "zoned": false, 00:15:51.685 "supported_io_types": { 00:15:51.685 "read": true, 00:15:51.685 "write": true, 00:15:51.685 "unmap": false, 00:15:51.685 "flush": false, 00:15:51.685 "reset": true, 00:15:51.685 "nvme_admin": false, 00:15:51.685 "nvme_io": false, 00:15:51.685 "nvme_io_md": false, 00:15:51.685 "write_zeroes": true, 00:15:51.685 "zcopy": false, 00:15:51.685 "get_zone_info": false, 00:15:51.685 "zone_management": false, 00:15:51.685 "zone_append": false, 00:15:51.685 "compare": false, 00:15:51.685 "compare_and_write": false, 00:15:51.685 "abort": false, 00:15:51.685 "seek_hole": false, 00:15:51.685 "seek_data": false, 00:15:51.685 "copy": false, 00:15:51.685 "nvme_iov_md": false 00:15:51.685 }, 00:15:51.685 "driver_specific": { 00:15:51.685 "raid": { 00:15:51.685 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:51.685 "strip_size_kb": 64, 00:15:51.685 "state": "online", 00:15:51.685 "raid_level": "raid5f", 00:15:51.685 "superblock": true, 00:15:51.685 "num_base_bdevs": 3, 00:15:51.685 "num_base_bdevs_discovered": 3, 00:15:51.685 "num_base_bdevs_operational": 3, 00:15:51.685 "base_bdevs_list": [ 00:15:51.685 { 00:15:51.685 "name": "pt1", 00:15:51.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.685 "is_configured": true, 00:15:51.685 "data_offset": 2048, 00:15:51.685 "data_size": 63488 00:15:51.685 }, 00:15:51.685 { 00:15:51.685 "name": "pt2", 00:15:51.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.685 "is_configured": true, 00:15:51.685 "data_offset": 2048, 00:15:51.685 "data_size": 63488 00:15:51.685 }, 00:15:51.685 { 00:15:51.685 "name": "pt3", 00:15:51.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.685 "is_configured": true, 00:15:51.685 "data_offset": 2048, 00:15:51.685 "data_size": 63488 00:15:51.685 } 00:15:51.685 ] 00:15:51.685 } 00:15:51.685 } 
00:15:51.685 }' 00:15:51.685 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:51.945 pt2 00:15:51.945 pt3' 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.945 17:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.945 17:50:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.945 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.946 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:51.946 [2024-11-20 17:50:19.111597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af ']' 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.214 [2024-11-20 17:50:19.139392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.214 [2024-11-20 17:50:19.139458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.214 [2024-11-20 17:50:19.139547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.214 [2024-11-20 17:50:19.139645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.214 [2024-11-20 17:50:19.139690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.214 17:50:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:52.214 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.215 [2024-11-20 17:50:19.291179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:52.215 [2024-11-20 
17:50:19.293259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:52.215 [2024-11-20 17:50:19.293349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:52.215 [2024-11-20 17:50:19.293419] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:52.215 [2024-11-20 17:50:19.293502] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:52.215 [2024-11-20 17:50:19.293544] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:52.215 [2024-11-20 17:50:19.293612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.215 [2024-11-20 17:50:19.293632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:52.215 request: 00:15:52.215 { 00:15:52.215 "name": "raid_bdev1", 00:15:52.215 "raid_level": "raid5f", 00:15:52.215 "base_bdevs": [ 00:15:52.215 "malloc1", 00:15:52.215 "malloc2", 00:15:52.215 "malloc3" 00:15:52.215 ], 00:15:52.215 "strip_size_kb": 64, 00:15:52.215 "superblock": false, 00:15:52.215 "method": "bdev_raid_create", 00:15:52.215 "req_id": 1 00:15:52.215 } 00:15:52.215 Got JSON-RPC error response 00:15:52.215 response: 00:15:52.215 { 00:15:52.215 "code": -17, 00:15:52.215 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:52.215 } 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
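The negative test above (`NOT rpc_cmd bdev_raid_create …`) is expected to fail because each malloc bdev still carries the superblock of the earlier `raid_bdev1`, and the RPC returns the JSON-RPC error object shown in the log. A minimal Python sketch of how a client might recognize that condition; the helper name is hypothetical, and the response shape is taken directly from the captured output:

```python
import json

# Error body as captured in the autotest log above (shape assumed from that
# output; SPDK maps -EEXIST to code -17 for bdev_raid_create).
response_text = """
{
    "code": -17,
    "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
"""

def is_already_exists_error(response: str) -> bool:
    """Return True when a JSON-RPC error body reports 'File exists' (-17)."""
    err = json.loads(response)
    return err.get("code") == -17 and "File exists" in err.get("message", "")

print(is_already_exists_error(response_text))  # True: the create is meant to fail
```

This mirrors what the surrounding `valid_exec_arg`/`es=1` bookkeeping verifies in shell: the command must fail, and with this specific error rather than an arbitrary one.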
00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.215 [2024-11-20 17:50:19.355013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:52.215 [2024-11-20 17:50:19.355104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.215 [2024-11-20 17:50:19.355136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:52.215 [2024-11-20 17:50:19.355159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.215 [2024-11-20 17:50:19.357453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.215 [2024-11-20 17:50:19.357517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:52.215 [2024-11-20 17:50:19.357589] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:52.215 [2024-11-20 17:50:19.357644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:52.215 pt1 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.215 17:50:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.475 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.475 "name": "raid_bdev1", 00:15:52.475 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:52.475 "strip_size_kb": 64, 00:15:52.475 "state": "configuring", 00:15:52.475 "raid_level": "raid5f", 00:15:52.475 "superblock": true, 00:15:52.475 "num_base_bdevs": 3, 00:15:52.475 "num_base_bdevs_discovered": 1, 00:15:52.475 "num_base_bdevs_operational": 3, 00:15:52.475 "base_bdevs_list": [ 00:15:52.475 { 00:15:52.475 "name": "pt1", 00:15:52.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.475 "is_configured": true, 00:15:52.475 "data_offset": 2048, 00:15:52.475 "data_size": 63488 00:15:52.475 }, 00:15:52.475 { 00:15:52.475 "name": null, 00:15:52.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.475 "is_configured": false, 00:15:52.475 "data_offset": 2048, 00:15:52.475 "data_size": 63488 00:15:52.475 }, 00:15:52.475 { 00:15:52.475 "name": null, 00:15:52.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.475 "is_configured": false, 00:15:52.475 "data_offset": 2048, 00:15:52.475 "data_size": 63488 00:15:52.475 } 00:15:52.475 ] 00:15:52.475 }' 00:15:52.475 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.475 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.735 [2024-11-20 17:50:19.826214] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:52.735 [2024-11-20 17:50:19.826319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.735 [2024-11-20 17:50:19.826359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:52.735 [2024-11-20 17:50:19.826387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.735 [2024-11-20 17:50:19.826843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.735 [2024-11-20 17:50:19.826902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:52.735 [2024-11-20 17:50:19.827023] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:52.735 [2024-11-20 17:50:19.827081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.735 pt2 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.735 [2024-11-20 17:50:19.838188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.735 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.735 "name": "raid_bdev1", 00:15:52.735 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:52.735 "strip_size_kb": 64, 00:15:52.735 "state": "configuring", 00:15:52.735 "raid_level": "raid5f", 00:15:52.735 "superblock": true, 00:15:52.735 "num_base_bdevs": 3, 00:15:52.735 "num_base_bdevs_discovered": 1, 00:15:52.735 "num_base_bdevs_operational": 3, 00:15:52.735 "base_bdevs_list": [ 00:15:52.735 { 00:15:52.735 "name": "pt1", 00:15:52.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.735 "is_configured": true, 00:15:52.735 "data_offset": 2048, 00:15:52.735 "data_size": 63488 00:15:52.735 }, 00:15:52.735 { 
00:15:52.735 "name": null, 00:15:52.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.735 "is_configured": false, 00:15:52.735 "data_offset": 0, 00:15:52.735 "data_size": 63488 00:15:52.735 }, 00:15:52.735 { 00:15:52.736 "name": null, 00:15:52.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.736 "is_configured": false, 00:15:52.736 "data_offset": 2048, 00:15:52.736 "data_size": 63488 00:15:52.736 } 00:15:52.736 ] 00:15:52.736 }' 00:15:52.736 17:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.736 17:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.305 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.306 [2024-11-20 17:50:20.241551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:53.306 [2024-11-20 17:50:20.241650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.306 [2024-11-20 17:50:20.241683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:53.306 [2024-11-20 17:50:20.241712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.306 [2024-11-20 17:50:20.242179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.306 [2024-11-20 17:50:20.242236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:53.306 [2024-11-20 
17:50:20.242328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:53.306 [2024-11-20 17:50:20.242379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:53.306 pt2 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.306 [2024-11-20 17:50:20.253531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:53.306 [2024-11-20 17:50:20.253614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.306 [2024-11-20 17:50:20.253642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:53.306 [2024-11-20 17:50:20.253671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.306 [2024-11-20 17:50:20.254069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.306 [2024-11-20 17:50:20.254128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:53.306 [2024-11-20 17:50:20.254212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:53.306 [2024-11-20 17:50:20.254258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:53.306 [2024-11-20 17:50:20.254400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:15:53.306 [2024-11-20 17:50:20.254443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:53.306 [2024-11-20 17:50:20.254701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:53.306 [2024-11-20 17:50:20.259543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:53.306 [2024-11-20 17:50:20.259593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:53.306 [2024-11-20 17:50:20.259809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.306 pt3 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.306 "name": "raid_bdev1", 00:15:53.306 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:53.306 "strip_size_kb": 64, 00:15:53.306 "state": "online", 00:15:53.306 "raid_level": "raid5f", 00:15:53.306 "superblock": true, 00:15:53.306 "num_base_bdevs": 3, 00:15:53.306 "num_base_bdevs_discovered": 3, 00:15:53.306 "num_base_bdevs_operational": 3, 00:15:53.306 "base_bdevs_list": [ 00:15:53.306 { 00:15:53.306 "name": "pt1", 00:15:53.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.306 "is_configured": true, 00:15:53.306 "data_offset": 2048, 00:15:53.306 "data_size": 63488 00:15:53.306 }, 00:15:53.306 { 00:15:53.306 "name": "pt2", 00:15:53.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.306 "is_configured": true, 00:15:53.306 "data_offset": 2048, 00:15:53.306 "data_size": 63488 00:15:53.306 }, 00:15:53.306 { 00:15:53.306 "name": "pt3", 00:15:53.306 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.306 "is_configured": true, 00:15:53.306 "data_offset": 2048, 00:15:53.306 "data_size": 63488 00:15:53.306 } 00:15:53.306 ] 00:15:53.306 }' 00:15:53.306 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.306 17:50:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.567 [2024-11-20 17:50:20.694423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.567 "name": "raid_bdev1", 00:15:53.567 "aliases": [ 00:15:53.567 "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af" 00:15:53.567 ], 00:15:53.567 "product_name": "Raid Volume", 00:15:53.567 "block_size": 512, 00:15:53.567 "num_blocks": 126976, 00:15:53.567 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:53.567 "assigned_rate_limits": { 00:15:53.567 "rw_ios_per_sec": 0, 00:15:53.567 "rw_mbytes_per_sec": 0, 00:15:53.567 "r_mbytes_per_sec": 0, 00:15:53.567 "w_mbytes_per_sec": 0 00:15:53.567 }, 
00:15:53.567 "claimed": false, 00:15:53.567 "zoned": false, 00:15:53.567 "supported_io_types": { 00:15:53.567 "read": true, 00:15:53.567 "write": true, 00:15:53.567 "unmap": false, 00:15:53.567 "flush": false, 00:15:53.567 "reset": true, 00:15:53.567 "nvme_admin": false, 00:15:53.567 "nvme_io": false, 00:15:53.567 "nvme_io_md": false, 00:15:53.567 "write_zeroes": true, 00:15:53.567 "zcopy": false, 00:15:53.567 "get_zone_info": false, 00:15:53.567 "zone_management": false, 00:15:53.567 "zone_append": false, 00:15:53.567 "compare": false, 00:15:53.567 "compare_and_write": false, 00:15:53.567 "abort": false, 00:15:53.567 "seek_hole": false, 00:15:53.567 "seek_data": false, 00:15:53.567 "copy": false, 00:15:53.567 "nvme_iov_md": false 00:15:53.567 }, 00:15:53.567 "driver_specific": { 00:15:53.567 "raid": { 00:15:53.567 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:53.567 "strip_size_kb": 64, 00:15:53.567 "state": "online", 00:15:53.567 "raid_level": "raid5f", 00:15:53.567 "superblock": true, 00:15:53.567 "num_base_bdevs": 3, 00:15:53.567 "num_base_bdevs_discovered": 3, 00:15:53.567 "num_base_bdevs_operational": 3, 00:15:53.567 "base_bdevs_list": [ 00:15:53.567 { 00:15:53.567 "name": "pt1", 00:15:53.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.567 "is_configured": true, 00:15:53.567 "data_offset": 2048, 00:15:53.567 "data_size": 63488 00:15:53.567 }, 00:15:53.567 { 00:15:53.567 "name": "pt2", 00:15:53.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.567 "is_configured": true, 00:15:53.567 "data_offset": 2048, 00:15:53.567 "data_size": 63488 00:15:53.567 }, 00:15:53.567 { 00:15:53.567 "name": "pt3", 00:15:53.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.567 "is_configured": true, 00:15:53.567 "data_offset": 2048, 00:15:53.567 "data_size": 63488 00:15:53.567 } 00:15:53.567 ] 00:15:53.567 } 00:15:53.567 } 00:15:53.567 }' 00:15:53.567 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:53.827 pt2 00:15:53.827 pt3' 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.827 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:53.828 17:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.828 [2024-11-20 17:50:21.001851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af '!=' 92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af ']' 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.088 [2024-11-20 17:50:21.049654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.088 "name": "raid_bdev1", 00:15:54.088 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:54.088 "strip_size_kb": 64, 00:15:54.088 "state": "online", 00:15:54.088 "raid_level": "raid5f", 00:15:54.088 "superblock": true, 00:15:54.088 "num_base_bdevs": 3, 00:15:54.088 "num_base_bdevs_discovered": 2, 00:15:54.088 "num_base_bdevs_operational": 2, 00:15:54.088 "base_bdevs_list": [ 00:15:54.088 { 00:15:54.088 "name": null, 00:15:54.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.088 "is_configured": false, 00:15:54.088 "data_offset": 0, 00:15:54.088 "data_size": 63488 00:15:54.088 }, 00:15:54.088 { 00:15:54.088 "name": "pt2", 00:15:54.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.088 "is_configured": true, 00:15:54.088 "data_offset": 2048, 00:15:54.088 "data_size": 63488 00:15:54.088 }, 00:15:54.088 { 00:15:54.088 "name": "pt3", 00:15:54.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.088 "is_configured": true, 00:15:54.088 "data_offset": 2048, 00:15:54.088 "data_size": 63488 00:15:54.088 } 00:15:54.088 ] 00:15:54.088 }' 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.088 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.349 
17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:54.349 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.349 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.349 [2024-11-20 17:50:21.492860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.349 [2024-11-20 17:50:21.492931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.349 [2024-11-20 17:50:21.493034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.349 [2024-11-20 17:50:21.493111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.349 [2024-11-20 17:50:21.493163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:54.349 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.349 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.349 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:54.349 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.349 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.349 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.609 [2024-11-20 17:50:21.580704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:15:54.609 [2024-11-20 17:50:21.580794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.609 [2024-11-20 17:50:21.580825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:54.609 [2024-11-20 17:50:21.580856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.609 [2024-11-20 17:50:21.583217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.609 [2024-11-20 17:50:21.583282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.609 [2024-11-20 17:50:21.583372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:54.609 [2024-11-20 17:50:21.583441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.609 pt2 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.609 "name": "raid_bdev1", 00:15:54.609 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:54.609 "strip_size_kb": 64, 00:15:54.609 "state": "configuring", 00:15:54.609 "raid_level": "raid5f", 00:15:54.609 "superblock": true, 00:15:54.609 "num_base_bdevs": 3, 00:15:54.609 "num_base_bdevs_discovered": 1, 00:15:54.609 "num_base_bdevs_operational": 2, 00:15:54.609 "base_bdevs_list": [ 00:15:54.609 { 00:15:54.609 "name": null, 00:15:54.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.609 "is_configured": false, 00:15:54.609 "data_offset": 2048, 00:15:54.609 "data_size": 63488 00:15:54.609 }, 00:15:54.609 { 00:15:54.609 "name": "pt2", 00:15:54.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.609 "is_configured": true, 00:15:54.609 "data_offset": 2048, 00:15:54.609 "data_size": 63488 00:15:54.609 }, 00:15:54.609 { 00:15:54.609 "name": null, 00:15:54.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.609 "is_configured": false, 00:15:54.609 "data_offset": 2048, 00:15:54.609 "data_size": 63488 00:15:54.609 } 00:15:54.609 ] 00:15:54.609 }' 00:15:54.609 17:50:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.609 17:50:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.869 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:54.869 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:54.869 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:54.869 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.869 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.869 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.869 [2024-11-20 17:50:22.039967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.869 [2024-11-20 17:50:22.040117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.869 [2024-11-20 17:50:22.040160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:54.870 [2024-11-20 17:50:22.040193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.870 [2024-11-20 17:50:22.040734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.870 [2024-11-20 17:50:22.040795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.870 [2024-11-20 17:50:22.040910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:54.870 [2024-11-20 17:50:22.040977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.870 [2024-11-20 17:50:22.041140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:54.870 [2024-11-20 17:50:22.041184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:54.870 [2024-11-20 
17:50:22.041463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:55.129 [2024-11-20 17:50:22.046593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:55.129 [2024-11-20 17:50:22.046645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:55.129 [2024-11-20 17:50:22.047002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.129 pt3 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.129 17:50:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.129 "name": "raid_bdev1", 00:15:55.129 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:55.129 "strip_size_kb": 64, 00:15:55.129 "state": "online", 00:15:55.129 "raid_level": "raid5f", 00:15:55.129 "superblock": true, 00:15:55.129 "num_base_bdevs": 3, 00:15:55.129 "num_base_bdevs_discovered": 2, 00:15:55.129 "num_base_bdevs_operational": 2, 00:15:55.129 "base_bdevs_list": [ 00:15:55.129 { 00:15:55.129 "name": null, 00:15:55.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.129 "is_configured": false, 00:15:55.129 "data_offset": 2048, 00:15:55.129 "data_size": 63488 00:15:55.129 }, 00:15:55.129 { 00:15:55.129 "name": "pt2", 00:15:55.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.129 "is_configured": true, 00:15:55.129 "data_offset": 2048, 00:15:55.129 "data_size": 63488 00:15:55.129 }, 00:15:55.129 { 00:15:55.129 "name": "pt3", 00:15:55.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.129 "is_configured": true, 00:15:55.129 "data_offset": 2048, 00:15:55.129 "data_size": 63488 00:15:55.129 } 00:15:55.129 ] 00:15:55.129 }' 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.129 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.389 [2024-11-20 17:50:22.521404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.389 [2024-11-20 17:50:22.521502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.389 [2024-11-20 17:50:22.521612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.389 [2024-11-20 17:50:22.521699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.389 [2024-11-20 17:50:22.521743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:55.389 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.389 17:50:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.649 [2024-11-20 17:50:22.577301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.649 [2024-11-20 17:50:22.577359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.649 [2024-11-20 17:50:22.577380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:55.649 [2024-11-20 17:50:22.577390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.649 [2024-11-20 17:50:22.579899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.649 [2024-11-20 17:50:22.579934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.649 [2024-11-20 17:50:22.580024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:55.649 [2024-11-20 17:50:22.580076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.649 [2024-11-20 17:50:22.580235] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:55.649 [2024-11-20 17:50:22.580246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.649 [2024-11-20 17:50:22.580263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:55.649 
[2024-11-20 17:50:22.580313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.649 pt1 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.649 "name": "raid_bdev1", 00:15:55.649 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:55.649 "strip_size_kb": 64, 00:15:55.649 "state": "configuring", 00:15:55.649 "raid_level": "raid5f", 00:15:55.649 "superblock": true, 00:15:55.649 "num_base_bdevs": 3, 00:15:55.649 "num_base_bdevs_discovered": 1, 00:15:55.649 "num_base_bdevs_operational": 2, 00:15:55.649 "base_bdevs_list": [ 00:15:55.649 { 00:15:55.649 "name": null, 00:15:55.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.649 "is_configured": false, 00:15:55.649 "data_offset": 2048, 00:15:55.649 "data_size": 63488 00:15:55.649 }, 00:15:55.649 { 00:15:55.649 "name": "pt2", 00:15:55.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.649 "is_configured": true, 00:15:55.649 "data_offset": 2048, 00:15:55.649 "data_size": 63488 00:15:55.649 }, 00:15:55.649 { 00:15:55.649 "name": null, 00:15:55.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.649 "is_configured": false, 00:15:55.649 "data_offset": 2048, 00:15:55.649 "data_size": 63488 00:15:55.649 } 00:15:55.649 ] 00:15:55.649 }' 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.649 17:50:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.909 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 [2024-11-20 17:50:23.052499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.909 [2024-11-20 17:50:23.052593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.909 [2024-11-20 17:50:23.052630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:55.909 [2024-11-20 17:50:23.052656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.909 [2024-11-20 17:50:23.053160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.909 [2024-11-20 17:50:23.053217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.909 [2024-11-20 17:50:23.053313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:55.910 [2024-11-20 17:50:23.053360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.910 [2024-11-20 17:50:23.053500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:55.910 [2024-11-20 17:50:23.053533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:55.910 [2024-11-20 17:50:23.053809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:55.910 [2024-11-20 17:50:23.059035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:55.910 [2024-11-20 
17:50:23.059091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:55.910 [2024-11-20 17:50:23.059372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.910 pt3 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.910 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.170 17:50:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.170 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.170 "name": "raid_bdev1", 00:15:56.170 "uuid": "92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af", 00:15:56.170 "strip_size_kb": 64, 00:15:56.170 "state": "online", 00:15:56.170 "raid_level": "raid5f", 00:15:56.170 "superblock": true, 00:15:56.170 "num_base_bdevs": 3, 00:15:56.170 "num_base_bdevs_discovered": 2, 00:15:56.170 "num_base_bdevs_operational": 2, 00:15:56.170 "base_bdevs_list": [ 00:15:56.170 { 00:15:56.170 "name": null, 00:15:56.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.170 "is_configured": false, 00:15:56.170 "data_offset": 2048, 00:15:56.170 "data_size": 63488 00:15:56.170 }, 00:15:56.170 { 00:15:56.170 "name": "pt2", 00:15:56.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.170 "is_configured": true, 00:15:56.170 "data_offset": 2048, 00:15:56.170 "data_size": 63488 00:15:56.170 }, 00:15:56.170 { 00:15:56.170 "name": "pt3", 00:15:56.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.170 "is_configured": true, 00:15:56.170 "data_offset": 2048, 00:15:56.170 "data_size": 63488 00:15:56.170 } 00:15:56.170 ] 00:15:56.170 }' 00:15:56.170 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.170 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:56.429 [2024-11-20 17:50:23.573676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.429 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af '!=' 92eb4c8f-2d6d-49e1-a2df-e5c02c6d51af ']' 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81609 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81609 ']' 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81609 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81609 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 81609' 00:15:56.688 killing process with pid 81609 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81609 00:15:56.688 [2024-11-20 17:50:23.662883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.688 [2024-11-20 17:50:23.662970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.688 [2024-11-20 17:50:23.663044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.688 [2024-11-20 17:50:23.663057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:56.688 17:50:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81609 00:15:56.948 [2024-11-20 17:50:23.980096] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.326 17:50:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:58.326 00:15:58.326 real 0m7.895s 00:15:58.327 user 0m12.206s 00:15:58.327 sys 0m1.488s 00:15:58.327 ************************************ 00:15:58.327 END TEST raid5f_superblock_test 00:15:58.327 ************************************ 00:15:58.327 17:50:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.327 17:50:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.327 17:50:25 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:58.327 17:50:25 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:58.327 17:50:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:58.327 17:50:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.327 17:50:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.327 ************************************ 00:15:58.327 START TEST raid5f_rebuild_test 
00:15:58.327 ************************************ 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82054 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82054 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82054 ']' 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:58.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.327 17:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.327 [2024-11-20 17:50:25.322118] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:15:58.327 [2024-11-20 17:50:25.322300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.327 Zero copy mechanism will not be used. 00:15:58.327 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82054 ] 00:15:58.327 [2024-11-20 17:50:25.494826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.586 [2024-11-20 17:50:25.625444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.845 [2024-11-20 17:50:25.851061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.845 [2024-11-20 17:50:25.851233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.103 BaseBdev1_malloc 00:15:59.103 
17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.103 [2024-11-20 17:50:26.191264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:59.103 [2024-11-20 17:50:26.191336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.103 [2024-11-20 17:50:26.191360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.103 [2024-11-20 17:50:26.191372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.103 [2024-11-20 17:50:26.193657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.103 [2024-11-20 17:50:26.193778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.103 BaseBdev1 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.103 BaseBdev2_malloc 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.103 [2024-11-20 17:50:26.252037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:59.103 [2024-11-20 17:50:26.252152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.103 [2024-11-20 17:50:26.252190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.103 [2024-11-20 17:50:26.252224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.103 [2024-11-20 17:50:26.254482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.103 [2024-11-20 17:50:26.254552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.103 BaseBdev2 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.103 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.362 BaseBdev3_malloc 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.362 [2024-11-20 17:50:26.343661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:59.362 [2024-11-20 17:50:26.343759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.362 [2024-11-20 17:50:26.343783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:59.362 [2024-11-20 17:50:26.343796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.362 [2024-11-20 17:50:26.346046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.362 [2024-11-20 17:50:26.346083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:59.362 BaseBdev3 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.362 spare_malloc 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.362 spare_delay 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.362 [2024-11-20 17:50:26.411746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.362 [2024-11-20 17:50:26.411855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.362 [2024-11-20 17:50:26.411888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:59.362 [2024-11-20 17:50:26.411918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.362 [2024-11-20 17:50:26.414184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.362 [2024-11-20 17:50:26.414258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.362 spare 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.362 [2024-11-20 17:50:26.423796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.362 [2024-11-20 17:50:26.425733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.362 [2024-11-20 17:50:26.425830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.362 [2024-11-20 17:50:26.425931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:59.362 [2024-11-20 17:50:26.425965] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:59.362 [2024-11-20 17:50:26.426233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:59.362 [2024-11-20 17:50:26.431936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:59.362 [2024-11-20 17:50:26.431991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:59.362 [2024-11-20 17:50:26.432216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.362 "name": "raid_bdev1", 00:15:59.362 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:15:59.362 "strip_size_kb": 64, 00:15:59.362 "state": "online", 00:15:59.362 "raid_level": "raid5f", 00:15:59.362 "superblock": false, 00:15:59.362 "num_base_bdevs": 3, 00:15:59.362 "num_base_bdevs_discovered": 3, 00:15:59.362 "num_base_bdevs_operational": 3, 00:15:59.362 "base_bdevs_list": [ 00:15:59.362 { 00:15:59.362 "name": "BaseBdev1", 00:15:59.362 "uuid": "b5654287-d312-5b6f-ad9d-b537c272b600", 00:15:59.362 "is_configured": true, 00:15:59.362 "data_offset": 0, 00:15:59.362 "data_size": 65536 00:15:59.362 }, 00:15:59.362 { 00:15:59.362 "name": "BaseBdev2", 00:15:59.362 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:15:59.362 "is_configured": true, 00:15:59.362 "data_offset": 0, 00:15:59.362 "data_size": 65536 00:15:59.362 }, 00:15:59.362 { 00:15:59.362 "name": "BaseBdev3", 00:15:59.362 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:15:59.362 "is_configured": true, 00:15:59.362 "data_offset": 0, 00:15:59.362 "data_size": 65536 00:15:59.362 } 00:15:59.362 ] 00:15:59.362 }' 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.362 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.930 17:50:26 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.930 [2024-11-20 17:50:26.874864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:59.930 17:50:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:00.188 [2024-11-20 17:50:27.158219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:00.188 /dev/nbd0 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.188 1+0 records in 00:16:00.188 1+0 
records out 00:16:00.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330765 s, 12.4 MB/s 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:00.188 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.189 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.189 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.189 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.189 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.189 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:00.189 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:00.189 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:00.189 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:00.758 512+0 records in 00:16:00.758 512+0 records out 00:16:00.758 67108864 bytes (67 MB, 64 MiB) copied, 0.427464 s, 157 MB/s 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.758 [2024-11-20 17:50:27.877376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.758 [2024-11-20 17:50:27.896659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.758 17:50:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.758 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.018 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.018 "name": "raid_bdev1", 00:16:01.018 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:01.018 "strip_size_kb": 64, 00:16:01.018 "state": "online", 00:16:01.018 "raid_level": "raid5f", 00:16:01.018 "superblock": false, 00:16:01.018 "num_base_bdevs": 3, 00:16:01.018 "num_base_bdevs_discovered": 2, 00:16:01.018 "num_base_bdevs_operational": 2, 00:16:01.018 "base_bdevs_list": [ 00:16:01.018 { 00:16:01.018 "name": null, 00:16:01.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.018 "is_configured": 
false, 00:16:01.018 "data_offset": 0, 00:16:01.018 "data_size": 65536 00:16:01.018 }, 00:16:01.018 { 00:16:01.018 "name": "BaseBdev2", 00:16:01.018 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:01.018 "is_configured": true, 00:16:01.018 "data_offset": 0, 00:16:01.018 "data_size": 65536 00:16:01.018 }, 00:16:01.018 { 00:16:01.018 "name": "BaseBdev3", 00:16:01.018 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:01.018 "is_configured": true, 00:16:01.018 "data_offset": 0, 00:16:01.018 "data_size": 65536 00:16:01.018 } 00:16:01.018 ] 00:16:01.018 }' 00:16:01.018 17:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.018 17:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.277 17:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.277 17:50:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.277 17:50:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.277 [2024-11-20 17:50:28.335851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.277 [2024-11-20 17:50:28.353023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:01.277 17:50:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.277 17:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:01.277 [2024-11-20 17:50:28.360693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.216 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.475 "name": "raid_bdev1", 00:16:02.475 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:02.475 "strip_size_kb": 64, 00:16:02.475 "state": "online", 00:16:02.475 "raid_level": "raid5f", 00:16:02.475 "superblock": false, 00:16:02.475 "num_base_bdevs": 3, 00:16:02.475 "num_base_bdevs_discovered": 3, 00:16:02.475 "num_base_bdevs_operational": 3, 00:16:02.475 "process": { 00:16:02.475 "type": "rebuild", 00:16:02.475 "target": "spare", 00:16:02.475 "progress": { 00:16:02.475 "blocks": 20480, 00:16:02.475 "percent": 15 00:16:02.475 } 00:16:02.475 }, 00:16:02.475 "base_bdevs_list": [ 00:16:02.475 { 00:16:02.475 "name": "spare", 00:16:02.475 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:02.475 "is_configured": true, 00:16:02.475 "data_offset": 0, 00:16:02.475 "data_size": 65536 00:16:02.475 }, 00:16:02.475 { 00:16:02.475 "name": "BaseBdev2", 00:16:02.475 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:02.475 "is_configured": true, 00:16:02.475 "data_offset": 0, 00:16:02.475 "data_size": 65536 00:16:02.475 }, 00:16:02.475 { 00:16:02.475 "name": "BaseBdev3", 00:16:02.475 "uuid": 
"47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:02.475 "is_configured": true, 00:16:02.475 "data_offset": 0, 00:16:02.475 "data_size": 65536 00:16:02.475 } 00:16:02.475 ] 00:16:02.475 }' 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.475 [2024-11-20 17:50:29.515751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.475 [2024-11-20 17:50:29.570646] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.475 [2024-11-20 17:50:29.570755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.475 [2024-11-20 17:50:29.570776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.475 [2024-11-20 17:50:29.570785] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.475 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.735 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.735 "name": "raid_bdev1", 00:16:02.735 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:02.735 "strip_size_kb": 64, 00:16:02.735 "state": "online", 00:16:02.735 "raid_level": "raid5f", 00:16:02.735 "superblock": false, 00:16:02.735 "num_base_bdevs": 3, 00:16:02.735 "num_base_bdevs_discovered": 2, 00:16:02.735 "num_base_bdevs_operational": 2, 00:16:02.735 "base_bdevs_list": [ 00:16:02.735 { 00:16:02.735 "name": null, 00:16:02.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.735 "is_configured": false, 00:16:02.735 "data_offset": 0, 
00:16:02.735 "data_size": 65536 00:16:02.735 }, 00:16:02.735 { 00:16:02.735 "name": "BaseBdev2", 00:16:02.735 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:02.735 "is_configured": true, 00:16:02.735 "data_offset": 0, 00:16:02.735 "data_size": 65536 00:16:02.735 }, 00:16:02.735 { 00:16:02.735 "name": "BaseBdev3", 00:16:02.735 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:02.735 "is_configured": true, 00:16:02.735 "data_offset": 0, 00:16:02.735 "data_size": 65536 00:16:02.735 } 00:16:02.735 ] 00:16:02.735 }' 00:16:02.735 17:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.735 17:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.000 "name": "raid_bdev1", 00:16:03.000 "uuid": 
"9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:03.000 "strip_size_kb": 64, 00:16:03.000 "state": "online", 00:16:03.000 "raid_level": "raid5f", 00:16:03.000 "superblock": false, 00:16:03.000 "num_base_bdevs": 3, 00:16:03.000 "num_base_bdevs_discovered": 2, 00:16:03.000 "num_base_bdevs_operational": 2, 00:16:03.000 "base_bdevs_list": [ 00:16:03.000 { 00:16:03.000 "name": null, 00:16:03.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.000 "is_configured": false, 00:16:03.000 "data_offset": 0, 00:16:03.000 "data_size": 65536 00:16:03.000 }, 00:16:03.000 { 00:16:03.000 "name": "BaseBdev2", 00:16:03.000 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:03.000 "is_configured": true, 00:16:03.000 "data_offset": 0, 00:16:03.000 "data_size": 65536 00:16:03.000 }, 00:16:03.000 { 00:16:03.000 "name": "BaseBdev3", 00:16:03.000 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:03.000 "is_configured": true, 00:16:03.000 "data_offset": 0, 00:16:03.000 "data_size": 65536 00:16:03.000 } 00:16:03.000 ] 00:16:03.000 }' 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.000 17:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.000 [2024-11-20 17:50:30.159765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.275 [2024-11-20 17:50:30.175314] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:03.275 17:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.275 17:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:03.275 [2024-11-20 17:50:30.182330] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.230 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.230 "name": "raid_bdev1", 00:16:04.230 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:04.231 "strip_size_kb": 64, 00:16:04.231 "state": "online", 00:16:04.231 "raid_level": "raid5f", 00:16:04.231 "superblock": false, 00:16:04.231 "num_base_bdevs": 3, 00:16:04.231 "num_base_bdevs_discovered": 3, 00:16:04.231 "num_base_bdevs_operational": 3, 00:16:04.231 "process": { 
00:16:04.231 "type": "rebuild", 00:16:04.231 "target": "spare", 00:16:04.231 "progress": { 00:16:04.231 "blocks": 20480, 00:16:04.231 "percent": 15 00:16:04.231 } 00:16:04.231 }, 00:16:04.231 "base_bdevs_list": [ 00:16:04.231 { 00:16:04.231 "name": "spare", 00:16:04.231 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:04.231 "is_configured": true, 00:16:04.231 "data_offset": 0, 00:16:04.231 "data_size": 65536 00:16:04.231 }, 00:16:04.231 { 00:16:04.231 "name": "BaseBdev2", 00:16:04.231 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:04.231 "is_configured": true, 00:16:04.231 "data_offset": 0, 00:16:04.231 "data_size": 65536 00:16:04.231 }, 00:16:04.231 { 00:16:04.231 "name": "BaseBdev3", 00:16:04.231 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:04.231 "is_configured": true, 00:16:04.231 "data_offset": 0, 00:16:04.231 "data_size": 65536 00:16:04.231 } 00:16:04.231 ] 00:16:04.231 }' 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.231 "name": "raid_bdev1", 00:16:04.231 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:04.231 "strip_size_kb": 64, 00:16:04.231 "state": "online", 00:16:04.231 "raid_level": "raid5f", 00:16:04.231 "superblock": false, 00:16:04.231 "num_base_bdevs": 3, 00:16:04.231 "num_base_bdevs_discovered": 3, 00:16:04.231 "num_base_bdevs_operational": 3, 00:16:04.231 "process": { 00:16:04.231 "type": "rebuild", 00:16:04.231 "target": "spare", 00:16:04.231 "progress": { 00:16:04.231 "blocks": 22528, 00:16:04.231 "percent": 17 00:16:04.231 } 00:16:04.231 }, 00:16:04.231 "base_bdevs_list": [ 00:16:04.231 { 00:16:04.231 "name": "spare", 00:16:04.231 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:04.231 "is_configured": true, 00:16:04.231 "data_offset": 0, 00:16:04.231 "data_size": 65536 00:16:04.231 }, 00:16:04.231 { 00:16:04.231 "name": "BaseBdev2", 
00:16:04.231 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:04.231 "is_configured": true, 00:16:04.231 "data_offset": 0, 00:16:04.231 "data_size": 65536 00:16:04.231 }, 00:16:04.231 { 00:16:04.231 "name": "BaseBdev3", 00:16:04.231 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:04.231 "is_configured": true, 00:16:04.231 "data_offset": 0, 00:16:04.231 "data_size": 65536 00:16:04.231 } 00:16:04.231 ] 00:16:04.231 }' 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.231 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.491 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.491 17:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.432 
17:50:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.432 "name": "raid_bdev1", 00:16:05.432 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:05.432 "strip_size_kb": 64, 00:16:05.432 "state": "online", 00:16:05.432 "raid_level": "raid5f", 00:16:05.432 "superblock": false, 00:16:05.432 "num_base_bdevs": 3, 00:16:05.432 "num_base_bdevs_discovered": 3, 00:16:05.432 "num_base_bdevs_operational": 3, 00:16:05.432 "process": { 00:16:05.432 "type": "rebuild", 00:16:05.432 "target": "spare", 00:16:05.432 "progress": { 00:16:05.432 "blocks": 45056, 00:16:05.432 "percent": 34 00:16:05.432 } 00:16:05.432 }, 00:16:05.432 "base_bdevs_list": [ 00:16:05.432 { 00:16:05.432 "name": "spare", 00:16:05.432 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:05.432 "is_configured": true, 00:16:05.432 "data_offset": 0, 00:16:05.432 "data_size": 65536 00:16:05.432 }, 00:16:05.432 { 00:16:05.432 "name": "BaseBdev2", 00:16:05.432 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:05.432 "is_configured": true, 00:16:05.432 "data_offset": 0, 00:16:05.432 "data_size": 65536 00:16:05.432 }, 00:16:05.432 { 00:16:05.432 "name": "BaseBdev3", 00:16:05.432 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:05.432 "is_configured": true, 00:16:05.432 "data_offset": 0, 00:16:05.432 "data_size": 65536 00:16:05.432 } 00:16:05.432 ] 00:16:05.432 }' 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.432 17:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.814 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.814 "name": "raid_bdev1", 00:16:06.814 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:06.814 "strip_size_kb": 64, 00:16:06.814 "state": "online", 00:16:06.814 "raid_level": "raid5f", 00:16:06.814 "superblock": false, 00:16:06.814 "num_base_bdevs": 3, 00:16:06.814 "num_base_bdevs_discovered": 3, 00:16:06.814 "num_base_bdevs_operational": 3, 00:16:06.814 "process": { 00:16:06.814 "type": "rebuild", 00:16:06.814 "target": "spare", 00:16:06.814 "progress": { 00:16:06.814 "blocks": 67584, 00:16:06.814 "percent": 51 00:16:06.814 } 
00:16:06.814 }, 00:16:06.814 "base_bdevs_list": [ 00:16:06.814 { 00:16:06.814 "name": "spare", 00:16:06.814 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:06.814 "is_configured": true, 00:16:06.814 "data_offset": 0, 00:16:06.814 "data_size": 65536 00:16:06.814 }, 00:16:06.814 { 00:16:06.814 "name": "BaseBdev2", 00:16:06.814 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:06.814 "is_configured": true, 00:16:06.814 "data_offset": 0, 00:16:06.814 "data_size": 65536 00:16:06.814 }, 00:16:06.814 { 00:16:06.814 "name": "BaseBdev3", 00:16:06.814 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:06.814 "is_configured": true, 00:16:06.814 "data_offset": 0, 00:16:06.814 "data_size": 65536 00:16:06.814 } 00:16:06.814 ] 00:16:06.815 }' 00:16:06.815 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.815 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.815 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.815 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.815 17:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.754 17:50:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.754 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.754 "name": "raid_bdev1", 00:16:07.754 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:07.754 "strip_size_kb": 64, 00:16:07.754 "state": "online", 00:16:07.754 "raid_level": "raid5f", 00:16:07.754 "superblock": false, 00:16:07.754 "num_base_bdevs": 3, 00:16:07.754 "num_base_bdevs_discovered": 3, 00:16:07.754 "num_base_bdevs_operational": 3, 00:16:07.754 "process": { 00:16:07.754 "type": "rebuild", 00:16:07.754 "target": "spare", 00:16:07.754 "progress": { 00:16:07.754 "blocks": 92160, 00:16:07.754 "percent": 70 00:16:07.754 } 00:16:07.754 }, 00:16:07.754 "base_bdevs_list": [ 00:16:07.754 { 00:16:07.754 "name": "spare", 00:16:07.754 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 "data_size": 65536 00:16:07.754 }, 00:16:07.754 { 00:16:07.754 "name": "BaseBdev2", 00:16:07.754 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 "data_size": 65536 00:16:07.754 }, 00:16:07.754 { 00:16:07.754 "name": "BaseBdev3", 00:16:07.754 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 "data_size": 65536 00:16:07.754 } 00:16:07.754 ] 00:16:07.754 }' 00:16:07.755 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:16:07.755 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.755 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.755 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.755 17:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.136 "name": "raid_bdev1", 00:16:09.136 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:09.136 "strip_size_kb": 64, 00:16:09.136 "state": "online", 00:16:09.136 "raid_level": "raid5f", 00:16:09.136 "superblock": 
false, 00:16:09.136 "num_base_bdevs": 3, 00:16:09.136 "num_base_bdevs_discovered": 3, 00:16:09.136 "num_base_bdevs_operational": 3, 00:16:09.136 "process": { 00:16:09.136 "type": "rebuild", 00:16:09.136 "target": "spare", 00:16:09.136 "progress": { 00:16:09.136 "blocks": 114688, 00:16:09.136 "percent": 87 00:16:09.136 } 00:16:09.136 }, 00:16:09.136 "base_bdevs_list": [ 00:16:09.136 { 00:16:09.136 "name": "spare", 00:16:09.136 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:09.136 "is_configured": true, 00:16:09.136 "data_offset": 0, 00:16:09.136 "data_size": 65536 00:16:09.136 }, 00:16:09.136 { 00:16:09.136 "name": "BaseBdev2", 00:16:09.136 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:09.136 "is_configured": true, 00:16:09.136 "data_offset": 0, 00:16:09.136 "data_size": 65536 00:16:09.136 }, 00:16:09.136 { 00:16:09.136 "name": "BaseBdev3", 00:16:09.136 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:09.136 "is_configured": true, 00:16:09.136 "data_offset": 0, 00:16:09.136 "data_size": 65536 00:16:09.136 } 00:16:09.136 ] 00:16:09.136 }' 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.136 17:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.136 17:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.136 17:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.707 [2024-11-20 17:50:36.630694] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:09.707 [2024-11-20 17:50:36.630826] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:09.707 [2024-11-20 17:50:36.630894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.967 "name": "raid_bdev1", 00:16:09.967 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:09.967 "strip_size_kb": 64, 00:16:09.967 "state": "online", 00:16:09.967 "raid_level": "raid5f", 00:16:09.967 "superblock": false, 00:16:09.967 "num_base_bdevs": 3, 00:16:09.967 "num_base_bdevs_discovered": 3, 00:16:09.967 "num_base_bdevs_operational": 3, 00:16:09.967 "base_bdevs_list": [ 00:16:09.967 { 00:16:09.967 "name": "spare", 00:16:09.967 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:09.967 "is_configured": true, 00:16:09.967 "data_offset": 0, 00:16:09.967 "data_size": 65536 00:16:09.967 }, 00:16:09.967 { 00:16:09.967 "name": "BaseBdev2", 00:16:09.967 "uuid": 
"81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:09.967 "is_configured": true, 00:16:09.967 "data_offset": 0, 00:16:09.967 "data_size": 65536 00:16:09.967 }, 00:16:09.967 { 00:16:09.967 "name": "BaseBdev3", 00:16:09.967 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:09.967 "is_configured": true, 00:16:09.967 "data_offset": 0, 00:16:09.967 "data_size": 65536 00:16:09.967 } 00:16:09.967 ] 00:16:09.967 }' 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:09.967 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.227 "name": "raid_bdev1", 00:16:10.227 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:10.227 "strip_size_kb": 64, 00:16:10.227 "state": "online", 00:16:10.227 "raid_level": "raid5f", 00:16:10.227 "superblock": false, 00:16:10.227 "num_base_bdevs": 3, 00:16:10.227 "num_base_bdevs_discovered": 3, 00:16:10.227 "num_base_bdevs_operational": 3, 00:16:10.227 "base_bdevs_list": [ 00:16:10.227 { 00:16:10.227 "name": "spare", 00:16:10.227 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:10.227 "is_configured": true, 00:16:10.227 "data_offset": 0, 00:16:10.227 "data_size": 65536 00:16:10.227 }, 00:16:10.227 { 00:16:10.227 "name": "BaseBdev2", 00:16:10.227 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:10.227 "is_configured": true, 00:16:10.227 "data_offset": 0, 00:16:10.227 "data_size": 65536 00:16:10.227 }, 00:16:10.227 { 00:16:10.227 "name": "BaseBdev3", 00:16:10.227 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:10.227 "is_configured": true, 00:16:10.227 "data_offset": 0, 00:16:10.227 "data_size": 65536 00:16:10.227 } 00:16:10.227 ] 00:16:10.227 }' 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.227 "name": "raid_bdev1", 00:16:10.227 "uuid": "9df510d1-44aa-4843-8a97-3c96f6e1b99b", 00:16:10.227 "strip_size_kb": 64, 00:16:10.227 "state": "online", 00:16:10.227 "raid_level": "raid5f", 00:16:10.227 "superblock": false, 00:16:10.227 "num_base_bdevs": 3, 00:16:10.227 "num_base_bdevs_discovered": 3, 00:16:10.227 "num_base_bdevs_operational": 3, 00:16:10.227 "base_bdevs_list": [ 00:16:10.227 { 00:16:10.227 "name": "spare", 00:16:10.227 "uuid": "bd4a8509-4ab5-544d-8546-9158d4b8183a", 00:16:10.227 "is_configured": true, 00:16:10.227 "data_offset": 
0, 00:16:10.227 "data_size": 65536 00:16:10.227 }, 00:16:10.227 { 00:16:10.227 "name": "BaseBdev2", 00:16:10.227 "uuid": "81cbabe7-c4fc-51f5-b72e-cde28bd911ce", 00:16:10.227 "is_configured": true, 00:16:10.227 "data_offset": 0, 00:16:10.227 "data_size": 65536 00:16:10.227 }, 00:16:10.227 { 00:16:10.227 "name": "BaseBdev3", 00:16:10.227 "uuid": "47859f4a-821a-54da-b9de-69c7c1747f8a", 00:16:10.227 "is_configured": true, 00:16:10.227 "data_offset": 0, 00:16:10.227 "data_size": 65536 00:16:10.227 } 00:16:10.227 ] 00:16:10.227 }' 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.227 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.795 [2024-11-20 17:50:37.717331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.795 [2024-11-20 17:50:37.717418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.795 [2024-11-20 17:50:37.717554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.795 [2024-11-20 17:50:37.717658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.795 [2024-11-20 17:50:37.717713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:10.795 17:50:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:11.055 /dev/nbd0 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.055 1+0 records in 00:16:11.055 1+0 records out 00:16:11.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319762 s, 12.8 MB/s 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.055 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.055 17:50:38 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:11.316 /dev/nbd1 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.316 1+0 records in 00:16:11.316 1+0 records out 00:16:11.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539619 s, 7.6 MB/s 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.316 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:11.577 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:11.577 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.577 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:11.577 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.577 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:11.577 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.577 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:11.577 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82054 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82054 ']' 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82054 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.837 17:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82054 00:16:11.837 17:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:16:11.837 killing process with pid 82054 00:16:11.837 Received shutdown signal, test time was about 60.000000 seconds 00:16:11.837 00:16:11.837 Latency(us) 00:16:11.837 [2024-11-20T17:50:39.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.837 [2024-11-20T17:50:39.013Z] =================================================================================================================== 00:16:11.837 [2024-11-20T17:50:39.013Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:11.837 17:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.837 17:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82054' 00:16:11.837 17:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82054 00:16:11.837 [2024-11-20 17:50:39.007497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.837 17:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82054 00:16:12.408 [2024-11-20 17:50:39.420397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.812 17:50:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:13.812 00:16:13.812 real 0m15.355s 00:16:13.812 user 0m18.585s 00:16:13.812 sys 0m2.221s 00:16:13.812 17:50:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.812 17:50:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.812 ************************************ 00:16:13.812 END TEST raid5f_rebuild_test 00:16:13.812 ************************************ 00:16:13.812 17:50:40 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:13.812 17:50:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:13.812 17:50:40 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.812 17:50:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.812 ************************************ 00:16:13.812 START TEST raid5f_rebuild_test_sb 00:16:13.812 ************************************ 00:16:13.812 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:13.812 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.813 17:50:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82494 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82494 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82494 
']' 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.813 17:50:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.813 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:13.813 Zero copy mechanism will not be used. 00:16:13.813 [2024-11-20 17:50:40.753617] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:16:13.813 [2024-11-20 17:50:40.753735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82494 ] 00:16:13.813 [2024-11-20 17:50:40.925834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.073 [2024-11-20 17:50:41.051724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.333 [2024-11-20 17:50:41.269787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.333 [2024-11-20 17:50:41.269961] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:14.593 17:50:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.593 BaseBdev1_malloc 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.593 [2024-11-20 17:50:41.625585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:14.593 [2024-11-20 17:50:41.625732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.593 [2024-11-20 17:50:41.625760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:14.593 [2024-11-20 17:50:41.625773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.593 [2024-11-20 17:50:41.628084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.593 [2024-11-20 17:50:41.628120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:14.593 BaseBdev1 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.593 BaseBdev2_malloc 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.593 [2024-11-20 17:50:41.681101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:14.593 [2024-11-20 17:50:41.681171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.593 [2024-11-20 17:50:41.681196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:14.593 [2024-11-20 17:50:41.681220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.593 [2024-11-20 17:50:41.683636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.593 [2024-11-20 17:50:41.683671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:14.593 BaseBdev2 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:14.593 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.593 
17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.854 BaseBdev3_malloc 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.854 [2024-11-20 17:50:41.775492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:14.854 [2024-11-20 17:50:41.775546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.854 [2024-11-20 17:50:41.775569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:14.854 [2024-11-20 17:50:41.775581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.854 [2024-11-20 17:50:41.777905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.854 [2024-11-20 17:50:41.777980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:14.854 BaseBdev3 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.854 spare_malloc 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.854 spare_delay 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.854 [2024-11-20 17:50:41.848151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:14.854 [2024-11-20 17:50:41.848203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.854 [2024-11-20 17:50:41.848220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:14.854 [2024-11-20 17:50:41.848230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.854 [2024-11-20 17:50:41.850469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.854 [2024-11-20 17:50:41.850507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:14.854 spare 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.854 [2024-11-20 17:50:41.860204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.854 [2024-11-20 17:50:41.862189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.854 [2024-11-20 17:50:41.862265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.854 [2024-11-20 17:50:41.862487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:14.854 [2024-11-20 17:50:41.862533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:14.854 [2024-11-20 17:50:41.862788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:14.854 [2024-11-20 17:50:41.867606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:14.854 [2024-11-20 17:50:41.867676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:14.854 [2024-11-20 17:50:41.867876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.854 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.855 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.855 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.855 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.855 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.855 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.855 "name": "raid_bdev1", 00:16:14.855 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:14.855 "strip_size_kb": 64, 00:16:14.855 "state": "online", 00:16:14.855 "raid_level": "raid5f", 00:16:14.855 "superblock": true, 00:16:14.855 "num_base_bdevs": 3, 00:16:14.855 "num_base_bdevs_discovered": 3, 00:16:14.855 "num_base_bdevs_operational": 3, 00:16:14.855 "base_bdevs_list": [ 00:16:14.855 { 00:16:14.855 "name": "BaseBdev1", 00:16:14.855 "uuid": "1d0ac7ce-f322-59aa-855a-522af72ef490", 00:16:14.855 "is_configured": true, 00:16:14.855 "data_offset": 2048, 00:16:14.855 "data_size": 63488 00:16:14.855 }, 00:16:14.855 { 00:16:14.855 "name": "BaseBdev2", 00:16:14.855 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:14.855 "is_configured": true, 00:16:14.855 "data_offset": 2048, 00:16:14.855 "data_size": 63488 00:16:14.855 }, 00:16:14.855 { 00:16:14.855 "name": 
"BaseBdev3", 00:16:14.855 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:14.855 "is_configured": true, 00:16:14.855 "data_offset": 2048, 00:16:14.855 "data_size": 63488 00:16:14.855 } 00:16:14.855 ] 00:16:14.855 }' 00:16:14.855 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.855 17:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:15.424 [2024-11-20 17:50:42.329831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:15.424 17:50:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.424 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.425 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:15.425 [2024-11-20 17:50:42.581227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:15.685 /dev/nbd0 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.685 1+0 records in 00:16:15.685 1+0 records out 00:16:15.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353385 s, 11.6 MB/s 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:16:15.685 17:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:15.945 496+0 records in 00:16:15.945 496+0 records out 00:16:15.945 65011712 bytes (65 MB, 62 MiB) copied, 0.392556 s, 166 MB/s 00:16:15.945 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:15.945 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.945 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:15.945 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:15.945 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:15.945 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.945 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.205 [2024-11-20 17:50:43.261273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:16.205 17:50:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.205 [2024-11-20 17:50:43.276496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.205 "name": "raid_bdev1", 00:16:16.205 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:16.205 "strip_size_kb": 64, 00:16:16.205 "state": "online", 00:16:16.205 "raid_level": "raid5f", 00:16:16.205 "superblock": true, 00:16:16.205 "num_base_bdevs": 3, 00:16:16.205 "num_base_bdevs_discovered": 2, 00:16:16.205 "num_base_bdevs_operational": 2, 00:16:16.205 "base_bdevs_list": [ 00:16:16.205 { 00:16:16.205 "name": null, 00:16:16.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.205 "is_configured": false, 00:16:16.205 "data_offset": 0, 00:16:16.205 "data_size": 63488 00:16:16.205 }, 00:16:16.205 { 00:16:16.205 "name": "BaseBdev2", 00:16:16.205 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:16.205 "is_configured": true, 00:16:16.205 "data_offset": 2048, 00:16:16.205 "data_size": 63488 00:16:16.205 }, 00:16:16.205 { 00:16:16.205 "name": "BaseBdev3", 00:16:16.205 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:16.205 "is_configured": true, 00:16:16.205 "data_offset": 2048, 00:16:16.205 "data_size": 63488 00:16:16.205 } 00:16:16.205 ] 00:16:16.205 }' 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.205 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.775 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:16.775 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.775 17:50:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.775 [2024-11-20 17:50:43.727709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.775 [2024-11-20 17:50:43.745509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:16.775 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.775 17:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:16.775 [2024-11-20 17:50:43.752822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.715 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.715 "name": "raid_bdev1", 00:16:17.715 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 
00:16:17.715 "strip_size_kb": 64, 00:16:17.715 "state": "online", 00:16:17.715 "raid_level": "raid5f", 00:16:17.715 "superblock": true, 00:16:17.715 "num_base_bdevs": 3, 00:16:17.715 "num_base_bdevs_discovered": 3, 00:16:17.715 "num_base_bdevs_operational": 3, 00:16:17.715 "process": { 00:16:17.715 "type": "rebuild", 00:16:17.715 "target": "spare", 00:16:17.715 "progress": { 00:16:17.715 "blocks": 20480, 00:16:17.715 "percent": 16 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 "base_bdevs_list": [ 00:16:17.715 { 00:16:17.715 "name": "spare", 00:16:17.715 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:17.715 "is_configured": true, 00:16:17.715 "data_offset": 2048, 00:16:17.715 "data_size": 63488 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "name": "BaseBdev2", 00:16:17.715 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:17.715 "is_configured": true, 00:16:17.715 "data_offset": 2048, 00:16:17.715 "data_size": 63488 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "name": "BaseBdev3", 00:16:17.715 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:17.715 "is_configured": true, 00:16:17.715 "data_offset": 2048, 00:16:17.715 "data_size": 63488 00:16:17.715 } 00:16:17.715 ] 00:16:17.715 }' 00:16:17.716 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.716 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.716 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.716 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.716 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:17.716 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.716 17:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:17.716 [2024-11-20 17:50:44.884042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.975 [2024-11-20 17:50:44.962833] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:17.975 [2024-11-20 17:50:44.962944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.975 [2024-11-20 17:50:44.962982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.975 [2024-11-20 17:50:44.963003] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.975 "name": "raid_bdev1", 00:16:17.975 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:17.975 "strip_size_kb": 64, 00:16:17.975 "state": "online", 00:16:17.975 "raid_level": "raid5f", 00:16:17.975 "superblock": true, 00:16:17.975 "num_base_bdevs": 3, 00:16:17.975 "num_base_bdevs_discovered": 2, 00:16:17.975 "num_base_bdevs_operational": 2, 00:16:17.975 "base_bdevs_list": [ 00:16:17.975 { 00:16:17.975 "name": null, 00:16:17.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.975 "is_configured": false, 00:16:17.975 "data_offset": 0, 00:16:17.975 "data_size": 63488 00:16:17.975 }, 00:16:17.975 { 00:16:17.975 "name": "BaseBdev2", 00:16:17.975 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:17.975 "is_configured": true, 00:16:17.975 "data_offset": 2048, 00:16:17.975 "data_size": 63488 00:16:17.975 }, 00:16:17.975 { 00:16:17.975 "name": "BaseBdev3", 00:16:17.975 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:17.975 "is_configured": true, 00:16:17.975 "data_offset": 2048, 00:16:17.975 "data_size": 63488 00:16:17.975 } 00:16:17.975 ] 00:16:17.975 }' 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.975 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.235 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.235 17:50:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.235 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.235 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.235 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.235 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.235 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.235 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.235 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.494 "name": "raid_bdev1", 00:16:18.494 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:18.494 "strip_size_kb": 64, 00:16:18.494 "state": "online", 00:16:18.494 "raid_level": "raid5f", 00:16:18.494 "superblock": true, 00:16:18.494 "num_base_bdevs": 3, 00:16:18.494 "num_base_bdevs_discovered": 2, 00:16:18.494 "num_base_bdevs_operational": 2, 00:16:18.494 "base_bdevs_list": [ 00:16:18.494 { 00:16:18.494 "name": null, 00:16:18.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.494 "is_configured": false, 00:16:18.494 "data_offset": 0, 00:16:18.494 "data_size": 63488 00:16:18.494 }, 00:16:18.494 { 00:16:18.494 "name": "BaseBdev2", 00:16:18.494 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:18.494 "is_configured": true, 00:16:18.494 "data_offset": 2048, 00:16:18.494 "data_size": 63488 00:16:18.494 }, 00:16:18.494 { 00:16:18.494 "name": "BaseBdev3", 00:16:18.494 "uuid": 
"68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:18.494 "is_configured": true, 00:16:18.494 "data_offset": 2048, 00:16:18.494 "data_size": 63488 00:16:18.494 } 00:16:18.494 ] 00:16:18.494 }' 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.494 [2024-11-20 17:50:45.529170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.494 [2024-11-20 17:50:45.545032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.494 17:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:18.494 [2024-11-20 17:50:45.552157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.434 "name": "raid_bdev1", 00:16:19.434 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:19.434 "strip_size_kb": 64, 00:16:19.434 "state": "online", 00:16:19.434 "raid_level": "raid5f", 00:16:19.434 "superblock": true, 00:16:19.434 "num_base_bdevs": 3, 00:16:19.434 "num_base_bdevs_discovered": 3, 00:16:19.434 "num_base_bdevs_operational": 3, 00:16:19.434 "process": { 00:16:19.434 "type": "rebuild", 00:16:19.434 "target": "spare", 00:16:19.434 "progress": { 00:16:19.434 "blocks": 20480, 00:16:19.434 "percent": 16 00:16:19.434 } 00:16:19.434 }, 00:16:19.434 "base_bdevs_list": [ 00:16:19.434 { 00:16:19.434 "name": "spare", 00:16:19.434 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:19.434 "is_configured": true, 00:16:19.434 "data_offset": 2048, 00:16:19.434 "data_size": 63488 00:16:19.434 }, 00:16:19.434 { 00:16:19.434 "name": "BaseBdev2", 00:16:19.434 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:19.434 "is_configured": true, 00:16:19.434 "data_offset": 2048, 00:16:19.434 "data_size": 63488 00:16:19.434 }, 00:16:19.434 { 00:16:19.434 "name": "BaseBdev3", 00:16:19.434 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:19.434 
"is_configured": true, 00:16:19.434 "data_offset": 2048, 00:16:19.434 "data_size": 63488 00:16:19.434 } 00:16:19.434 ] 00:16:19.434 }' 00:16:19.434 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:19.694 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=575 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.694 "name": "raid_bdev1", 00:16:19.694 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:19.694 "strip_size_kb": 64, 00:16:19.694 "state": "online", 00:16:19.694 "raid_level": "raid5f", 00:16:19.694 "superblock": true, 00:16:19.694 "num_base_bdevs": 3, 00:16:19.694 "num_base_bdevs_discovered": 3, 00:16:19.694 "num_base_bdevs_operational": 3, 00:16:19.694 "process": { 00:16:19.694 "type": "rebuild", 00:16:19.694 "target": "spare", 00:16:19.694 "progress": { 00:16:19.694 "blocks": 22528, 00:16:19.694 "percent": 17 00:16:19.694 } 00:16:19.694 }, 00:16:19.694 "base_bdevs_list": [ 00:16:19.694 { 00:16:19.694 "name": "spare", 00:16:19.694 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:19.694 "is_configured": true, 00:16:19.694 "data_offset": 2048, 00:16:19.694 "data_size": 63488 00:16:19.694 }, 00:16:19.694 { 00:16:19.694 "name": "BaseBdev2", 00:16:19.694 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:19.694 "is_configured": true, 00:16:19.694 "data_offset": 2048, 00:16:19.694 "data_size": 63488 00:16:19.694 }, 00:16:19.694 { 00:16:19.694 "name": "BaseBdev3", 00:16:19.694 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:19.694 "is_configured": true, 00:16:19.694 "data_offset": 2048, 00:16:19.694 "data_size": 63488 00:16:19.694 } 00:16:19.694 ] 00:16:19.694 }' 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.694 17:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.076 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.076 "name": "raid_bdev1", 00:16:21.076 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:21.076 "strip_size_kb": 64, 00:16:21.076 "state": "online", 00:16:21.076 
"raid_level": "raid5f", 00:16:21.076 "superblock": true, 00:16:21.076 "num_base_bdevs": 3, 00:16:21.076 "num_base_bdevs_discovered": 3, 00:16:21.076 "num_base_bdevs_operational": 3, 00:16:21.076 "process": { 00:16:21.076 "type": "rebuild", 00:16:21.077 "target": "spare", 00:16:21.077 "progress": { 00:16:21.077 "blocks": 45056, 00:16:21.077 "percent": 35 00:16:21.077 } 00:16:21.077 }, 00:16:21.077 "base_bdevs_list": [ 00:16:21.077 { 00:16:21.077 "name": "spare", 00:16:21.077 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:21.077 "is_configured": true, 00:16:21.077 "data_offset": 2048, 00:16:21.077 "data_size": 63488 00:16:21.077 }, 00:16:21.077 { 00:16:21.077 "name": "BaseBdev2", 00:16:21.077 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:21.077 "is_configured": true, 00:16:21.077 "data_offset": 2048, 00:16:21.077 "data_size": 63488 00:16:21.077 }, 00:16:21.077 { 00:16:21.077 "name": "BaseBdev3", 00:16:21.077 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:21.077 "is_configured": true, 00:16:21.077 "data_offset": 2048, 00:16:21.077 "data_size": 63488 00:16:21.077 } 00:16:21.077 ] 00:16:21.077 }' 00:16:21.077 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.077 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.077 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.077 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.077 17:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.017 17:50:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.017 17:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.017 "name": "raid_bdev1", 00:16:22.017 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:22.017 "strip_size_kb": 64, 00:16:22.017 "state": "online", 00:16:22.017 "raid_level": "raid5f", 00:16:22.017 "superblock": true, 00:16:22.017 "num_base_bdevs": 3, 00:16:22.017 "num_base_bdevs_discovered": 3, 00:16:22.017 "num_base_bdevs_operational": 3, 00:16:22.017 "process": { 00:16:22.017 "type": "rebuild", 00:16:22.017 "target": "spare", 00:16:22.017 "progress": { 00:16:22.017 "blocks": 67584, 00:16:22.017 "percent": 53 00:16:22.017 } 00:16:22.017 }, 00:16:22.017 "base_bdevs_list": [ 00:16:22.017 { 00:16:22.017 "name": "spare", 00:16:22.017 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:22.017 "is_configured": true, 00:16:22.017 "data_offset": 2048, 00:16:22.017 "data_size": 63488 00:16:22.017 }, 00:16:22.017 { 00:16:22.017 "name": "BaseBdev2", 00:16:22.017 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:22.017 
"is_configured": true, 00:16:22.017 "data_offset": 2048, 00:16:22.017 "data_size": 63488 00:16:22.017 }, 00:16:22.017 { 00:16:22.017 "name": "BaseBdev3", 00:16:22.017 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:22.017 "is_configured": true, 00:16:22.017 "data_offset": 2048, 00:16:22.017 "data_size": 63488 00:16:22.017 } 00:16:22.017 ] 00:16:22.017 }' 00:16:22.017 17:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.017 17:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.017 17:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.017 17:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.017 17:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.974 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.974 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.974 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.974 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.974 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.974 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.974 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.974 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.975 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.975 17:50:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.975 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.975 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.975 "name": "raid_bdev1", 00:16:22.975 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:22.975 "strip_size_kb": 64, 00:16:22.975 "state": "online", 00:16:22.975 "raid_level": "raid5f", 00:16:22.975 "superblock": true, 00:16:22.975 "num_base_bdevs": 3, 00:16:22.975 "num_base_bdevs_discovered": 3, 00:16:22.975 "num_base_bdevs_operational": 3, 00:16:22.975 "process": { 00:16:22.975 "type": "rebuild", 00:16:22.975 "target": "spare", 00:16:22.975 "progress": { 00:16:22.975 "blocks": 90112, 00:16:22.975 "percent": 70 00:16:22.975 } 00:16:22.975 }, 00:16:22.975 "base_bdevs_list": [ 00:16:22.975 { 00:16:22.975 "name": "spare", 00:16:22.975 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:22.975 "is_configured": true, 00:16:22.975 "data_offset": 2048, 00:16:22.975 "data_size": 63488 00:16:22.975 }, 00:16:22.975 { 00:16:22.975 "name": "BaseBdev2", 00:16:22.975 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:22.975 "is_configured": true, 00:16:22.975 "data_offset": 2048, 00:16:22.975 "data_size": 63488 00:16:22.975 }, 00:16:22.975 { 00:16:22.975 "name": "BaseBdev3", 00:16:22.975 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:22.975 "is_configured": true, 00:16:22.975 "data_offset": 2048, 00:16:22.975 "data_size": 63488 00:16:22.975 } 00:16:22.975 ] 00:16:22.975 }' 00:16:22.975 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.233 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.233 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.233 17:50:50 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.233 17:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.173 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.173 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.173 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.174 "name": "raid_bdev1", 00:16:24.174 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:24.174 "strip_size_kb": 64, 00:16:24.174 "state": "online", 00:16:24.174 "raid_level": "raid5f", 00:16:24.174 "superblock": true, 00:16:24.174 "num_base_bdevs": 3, 00:16:24.174 "num_base_bdevs_discovered": 3, 00:16:24.174 "num_base_bdevs_operational": 3, 00:16:24.174 "process": { 00:16:24.174 "type": "rebuild", 00:16:24.174 "target": "spare", 00:16:24.174 "progress": { 00:16:24.174 "blocks": 114688, 
00:16:24.174 "percent": 90 00:16:24.174 } 00:16:24.174 }, 00:16:24.174 "base_bdevs_list": [ 00:16:24.174 { 00:16:24.174 "name": "spare", 00:16:24.174 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:24.174 "is_configured": true, 00:16:24.174 "data_offset": 2048, 00:16:24.174 "data_size": 63488 00:16:24.174 }, 00:16:24.174 { 00:16:24.174 "name": "BaseBdev2", 00:16:24.174 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:24.174 "is_configured": true, 00:16:24.174 "data_offset": 2048, 00:16:24.174 "data_size": 63488 00:16:24.174 }, 00:16:24.174 { 00:16:24.174 "name": "BaseBdev3", 00:16:24.174 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:24.174 "is_configured": true, 00:16:24.174 "data_offset": 2048, 00:16:24.174 "data_size": 63488 00:16:24.174 } 00:16:24.174 ] 00:16:24.174 }' 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.174 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.434 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.434 17:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.694 [2024-11-20 17:50:51.803075] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:24.694 [2024-11-20 17:50:51.803211] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:24.694 [2024-11-20 17:50:51.803340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.265 
17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.265 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.265 "name": "raid_bdev1", 00:16:25.265 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:25.265 "strip_size_kb": 64, 00:16:25.265 "state": "online", 00:16:25.265 "raid_level": "raid5f", 00:16:25.265 "superblock": true, 00:16:25.265 "num_base_bdevs": 3, 00:16:25.265 "num_base_bdevs_discovered": 3, 00:16:25.265 "num_base_bdevs_operational": 3, 00:16:25.265 "base_bdevs_list": [ 00:16:25.265 { 00:16:25.265 "name": "spare", 00:16:25.265 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:25.265 "is_configured": true, 00:16:25.265 "data_offset": 2048, 00:16:25.265 "data_size": 63488 00:16:25.265 }, 00:16:25.265 { 00:16:25.265 "name": "BaseBdev2", 00:16:25.265 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:25.265 "is_configured": true, 00:16:25.265 "data_offset": 2048, 00:16:25.265 "data_size": 63488 00:16:25.265 }, 00:16:25.265 { 00:16:25.265 "name": "BaseBdev3", 00:16:25.265 
"uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:25.265 "is_configured": true, 00:16:25.265 "data_offset": 2048, 00:16:25.265 "data_size": 63488 00:16:25.265 } 00:16:25.265 ] 00:16:25.265 }' 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.525 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.526 "name": 
"raid_bdev1", 00:16:25.526 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:25.526 "strip_size_kb": 64, 00:16:25.526 "state": "online", 00:16:25.526 "raid_level": "raid5f", 00:16:25.526 "superblock": true, 00:16:25.526 "num_base_bdevs": 3, 00:16:25.526 "num_base_bdevs_discovered": 3, 00:16:25.526 "num_base_bdevs_operational": 3, 00:16:25.526 "base_bdevs_list": [ 00:16:25.526 { 00:16:25.526 "name": "spare", 00:16:25.526 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:25.526 "is_configured": true, 00:16:25.526 "data_offset": 2048, 00:16:25.526 "data_size": 63488 00:16:25.526 }, 00:16:25.526 { 00:16:25.526 "name": "BaseBdev2", 00:16:25.526 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:25.526 "is_configured": true, 00:16:25.526 "data_offset": 2048, 00:16:25.526 "data_size": 63488 00:16:25.526 }, 00:16:25.526 { 00:16:25.526 "name": "BaseBdev3", 00:16:25.526 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:25.526 "is_configured": true, 00:16:25.526 "data_offset": 2048, 00:16:25.526 "data_size": 63488 00:16:25.526 } 00:16:25.526 ] 00:16:25.526 }' 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.526 "name": "raid_bdev1", 00:16:25.526 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:25.526 "strip_size_kb": 64, 00:16:25.526 "state": "online", 00:16:25.526 "raid_level": "raid5f", 00:16:25.526 "superblock": true, 00:16:25.526 "num_base_bdevs": 3, 00:16:25.526 "num_base_bdevs_discovered": 3, 00:16:25.526 "num_base_bdevs_operational": 3, 00:16:25.526 "base_bdevs_list": [ 00:16:25.526 { 00:16:25.526 "name": "spare", 00:16:25.526 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:25.526 "is_configured": true, 00:16:25.526 "data_offset": 2048, 00:16:25.526 "data_size": 63488 00:16:25.526 }, 00:16:25.526 { 00:16:25.526 "name": "BaseBdev2", 
00:16:25.526 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:25.526 "is_configured": true, 00:16:25.526 "data_offset": 2048, 00:16:25.526 "data_size": 63488 00:16:25.526 }, 00:16:25.526 { 00:16:25.526 "name": "BaseBdev3", 00:16:25.526 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:25.526 "is_configured": true, 00:16:25.526 "data_offset": 2048, 00:16:25.526 "data_size": 63488 00:16:25.526 } 00:16:25.526 ] 00:16:25.526 }' 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.526 17:50:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.096 [2024-11-20 17:50:53.065951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.096 [2024-11-20 17:50:53.066083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.096 [2024-11-20 17:50:53.066198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.096 [2024-11-20 17:50:53.066287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.096 [2024-11-20 17:50:53.066304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.096 17:50:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.096 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:26.356 /dev/nbd0 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.356 1+0 records in 00:16:26.356 1+0 records out 00:16:26.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300928 s, 13.6 MB/s 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:16:26.356 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:26.616 /dev/nbd1 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.616 1+0 records in 00:16:26.616 1+0 records out 00:16:26.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330607 s, 12.4 MB/s 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.616 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.876 17:50:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.876 17:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 
-- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.137 [2024-11-20 17:50:54.206084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.137 [2024-11-20 17:50:54.206144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.137 [2024-11-20 17:50:54.206165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:27.137 [2024-11-20 17:50:54.206176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.137 [2024-11-20 17:50:54.208538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.137 [2024-11-20 17:50:54.208647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.137 [2024-11-20 17:50:54.208742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:27.137 [2024-11-20 17:50:54.208803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.137 [2024-11-20 17:50:54.208949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.137 [2024-11-20 17:50:54.209089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.137 spare 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.137 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.137 [2024-11-20 17:50:54.308992] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:27.137 [2024-11-20 17:50:54.309035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:27.137 [2024-11-20 17:50:54.309363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:27.397 [2024-11-20 17:50:54.314660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:27.397 [2024-11-20 17:50:54.314718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:27.397 [2024-11-20 17:50:54.314960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.397 17:50:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.397 "name": "raid_bdev1", 00:16:27.397 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:27.397 "strip_size_kb": 64, 00:16:27.397 "state": "online", 00:16:27.397 "raid_level": "raid5f", 00:16:27.397 "superblock": true, 00:16:27.397 "num_base_bdevs": 3, 00:16:27.397 "num_base_bdevs_discovered": 3, 00:16:27.397 "num_base_bdevs_operational": 3, 00:16:27.397 "base_bdevs_list": [ 00:16:27.397 { 00:16:27.397 "name": "spare", 00:16:27.397 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:27.397 "is_configured": true, 00:16:27.397 "data_offset": 2048, 00:16:27.397 "data_size": 63488 00:16:27.397 }, 00:16:27.397 { 00:16:27.397 "name": "BaseBdev2", 00:16:27.397 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:27.397 "is_configured": true, 00:16:27.397 "data_offset": 2048, 00:16:27.397 "data_size": 63488 00:16:27.397 }, 00:16:27.397 { 00:16:27.397 "name": "BaseBdev3", 00:16:27.397 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:27.397 "is_configured": true, 00:16:27.397 "data_offset": 2048, 00:16:27.397 "data_size": 63488 00:16:27.397 } 00:16:27.397 ] 00:16:27.397 }' 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.397 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.657 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.657 "name": "raid_bdev1", 00:16:27.657 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:27.657 "strip_size_kb": 64, 00:16:27.657 "state": "online", 00:16:27.657 "raid_level": "raid5f", 00:16:27.657 "superblock": true, 00:16:27.657 "num_base_bdevs": 3, 00:16:27.657 "num_base_bdevs_discovered": 3, 00:16:27.657 "num_base_bdevs_operational": 3, 00:16:27.657 "base_bdevs_list": [ 00:16:27.657 { 00:16:27.657 "name": "spare", 00:16:27.657 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:27.657 "is_configured": true, 00:16:27.657 "data_offset": 2048, 00:16:27.657 "data_size": 63488 00:16:27.657 }, 00:16:27.657 { 00:16:27.657 "name": "BaseBdev2", 00:16:27.657 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:27.657 "is_configured": true, 00:16:27.657 "data_offset": 2048, 00:16:27.657 "data_size": 63488 
00:16:27.657 }, 00:16:27.657 { 00:16:27.657 "name": "BaseBdev3", 00:16:27.657 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:27.658 "is_configured": true, 00:16:27.658 "data_offset": 2048, 00:16:27.658 "data_size": 63488 00:16:27.658 } 00:16:27.658 ] 00:16:27.658 }' 00:16:27.658 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.917 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.917 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.917 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.917 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.917 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.918 [2024-11-20 17:50:54.928892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.918 "name": "raid_bdev1", 00:16:27.918 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:27.918 "strip_size_kb": 64, 00:16:27.918 "state": "online", 00:16:27.918 "raid_level": "raid5f", 00:16:27.918 "superblock": true, 00:16:27.918 "num_base_bdevs": 3, 
00:16:27.918 "num_base_bdevs_discovered": 2, 00:16:27.918 "num_base_bdevs_operational": 2, 00:16:27.918 "base_bdevs_list": [ 00:16:27.918 { 00:16:27.918 "name": null, 00:16:27.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.918 "is_configured": false, 00:16:27.918 "data_offset": 0, 00:16:27.918 "data_size": 63488 00:16:27.918 }, 00:16:27.918 { 00:16:27.918 "name": "BaseBdev2", 00:16:27.918 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:27.918 "is_configured": true, 00:16:27.918 "data_offset": 2048, 00:16:27.918 "data_size": 63488 00:16:27.918 }, 00:16:27.918 { 00:16:27.918 "name": "BaseBdev3", 00:16:27.918 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:27.918 "is_configured": true, 00:16:27.918 "data_offset": 2048, 00:16:27.918 "data_size": 63488 00:16:27.918 } 00:16:27.918 ] 00:16:27.918 }' 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.918 17:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.488 17:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.488 17:50:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.488 17:50:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.488 [2024-11-20 17:50:55.364155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.488 [2024-11-20 17:50:55.364371] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:28.488 [2024-11-20 17:50:55.364431] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:28.488 [2024-11-20 17:50:55.364497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.488 [2024-11-20 17:50:55.380397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:28.488 17:50:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.488 17:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:28.488 [2024-11-20 17:50:55.387426] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.430 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.430 "name": "raid_bdev1", 00:16:29.430 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:29.430 "strip_size_kb": 64, 00:16:29.430 "state": "online", 00:16:29.430 
"raid_level": "raid5f", 00:16:29.430 "superblock": true, 00:16:29.430 "num_base_bdevs": 3, 00:16:29.430 "num_base_bdevs_discovered": 3, 00:16:29.430 "num_base_bdevs_operational": 3, 00:16:29.430 "process": { 00:16:29.430 "type": "rebuild", 00:16:29.430 "target": "spare", 00:16:29.430 "progress": { 00:16:29.430 "blocks": 20480, 00:16:29.430 "percent": 16 00:16:29.430 } 00:16:29.430 }, 00:16:29.430 "base_bdevs_list": [ 00:16:29.430 { 00:16:29.430 "name": "spare", 00:16:29.430 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:29.430 "is_configured": true, 00:16:29.430 "data_offset": 2048, 00:16:29.430 "data_size": 63488 00:16:29.430 }, 00:16:29.430 { 00:16:29.430 "name": "BaseBdev2", 00:16:29.430 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:29.430 "is_configured": true, 00:16:29.430 "data_offset": 2048, 00:16:29.430 "data_size": 63488 00:16:29.430 }, 00:16:29.430 { 00:16:29.430 "name": "BaseBdev3", 00:16:29.430 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:29.430 "is_configured": true, 00:16:29.431 "data_offset": 2048, 00:16:29.431 "data_size": 63488 00:16:29.431 } 00:16:29.431 ] 00:16:29.431 }' 00:16:29.431 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.431 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.431 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.431 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.431 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:29.431 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.431 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.431 [2024-11-20 17:50:56.538722] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.431 [2024-11-20 17:50:56.596436] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.431 [2024-11-20 17:50:56.596503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.431 [2024-11-20 17:50:56.596520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.431 [2024-11-20 17:50:56.596530] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.691 "name": "raid_bdev1", 00:16:29.691 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:29.691 "strip_size_kb": 64, 00:16:29.691 "state": "online", 00:16:29.691 "raid_level": "raid5f", 00:16:29.691 "superblock": true, 00:16:29.691 "num_base_bdevs": 3, 00:16:29.691 "num_base_bdevs_discovered": 2, 00:16:29.691 "num_base_bdevs_operational": 2, 00:16:29.691 "base_bdevs_list": [ 00:16:29.691 { 00:16:29.691 "name": null, 00:16:29.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.691 "is_configured": false, 00:16:29.691 "data_offset": 0, 00:16:29.691 "data_size": 63488 00:16:29.691 }, 00:16:29.691 { 00:16:29.691 "name": "BaseBdev2", 00:16:29.691 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:29.691 "is_configured": true, 00:16:29.691 "data_offset": 2048, 00:16:29.691 "data_size": 63488 00:16:29.691 }, 00:16:29.691 { 00:16:29.691 "name": "BaseBdev3", 00:16:29.691 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:29.691 "is_configured": true, 00:16:29.691 "data_offset": 2048, 00:16:29.691 "data_size": 63488 00:16:29.691 } 00:16:29.691 ] 00:16:29.691 }' 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.691 17:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.951 17:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:29.951 17:50:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.951 17:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.951 [2024-11-20 17:50:57.089648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:29.951 [2024-11-20 17:50:57.089763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.951 [2024-11-20 17:50:57.089802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:29.951 [2024-11-20 17:50:57.089834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.951 [2024-11-20 17:50:57.090425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.951 [2024-11-20 17:50:57.090493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:29.952 [2024-11-20 17:50:57.090630] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:29.952 [2024-11-20 17:50:57.090676] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:29.952 [2024-11-20 17:50:57.090715] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:29.952 [2024-11-20 17:50:57.090773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.952 [2024-11-20 17:50:57.106160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:29.952 spare 00:16:29.952 17:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.952 17:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:29.952 [2024-11-20 17:50:57.113340] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.331 "name": "raid_bdev1", 00:16:31.331 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:31.331 "strip_size_kb": 64, 00:16:31.331 "state": 
"online", 00:16:31.331 "raid_level": "raid5f", 00:16:31.331 "superblock": true, 00:16:31.331 "num_base_bdevs": 3, 00:16:31.331 "num_base_bdevs_discovered": 3, 00:16:31.331 "num_base_bdevs_operational": 3, 00:16:31.331 "process": { 00:16:31.331 "type": "rebuild", 00:16:31.331 "target": "spare", 00:16:31.331 "progress": { 00:16:31.331 "blocks": 20480, 00:16:31.331 "percent": 16 00:16:31.331 } 00:16:31.331 }, 00:16:31.331 "base_bdevs_list": [ 00:16:31.331 { 00:16:31.331 "name": "spare", 00:16:31.331 "uuid": "5c8d5d4c-300d-5cee-ab13-4108fe59a23c", 00:16:31.331 "is_configured": true, 00:16:31.331 "data_offset": 2048, 00:16:31.331 "data_size": 63488 00:16:31.331 }, 00:16:31.331 { 00:16:31.331 "name": "BaseBdev2", 00:16:31.331 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:31.331 "is_configured": true, 00:16:31.331 "data_offset": 2048, 00:16:31.331 "data_size": 63488 00:16:31.331 }, 00:16:31.331 { 00:16:31.331 "name": "BaseBdev3", 00:16:31.331 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:31.331 "is_configured": true, 00:16:31.331 "data_offset": 2048, 00:16:31.331 "data_size": 63488 00:16:31.331 } 00:16:31.331 ] 00:16:31.331 }' 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.331 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.331 [2024-11-20 17:50:58.269121] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.332 [2024-11-20 17:50:58.322591] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.332 [2024-11-20 17:50:58.322694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.332 [2024-11-20 17:50:58.322731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.332 [2024-11-20 17:50:58.322751] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.332 "name": "raid_bdev1", 00:16:31.332 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:31.332 "strip_size_kb": 64, 00:16:31.332 "state": "online", 00:16:31.332 "raid_level": "raid5f", 00:16:31.332 "superblock": true, 00:16:31.332 "num_base_bdevs": 3, 00:16:31.332 "num_base_bdevs_discovered": 2, 00:16:31.332 "num_base_bdevs_operational": 2, 00:16:31.332 "base_bdevs_list": [ 00:16:31.332 { 00:16:31.332 "name": null, 00:16:31.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.332 "is_configured": false, 00:16:31.332 "data_offset": 0, 00:16:31.332 "data_size": 63488 00:16:31.332 }, 00:16:31.332 { 00:16:31.332 "name": "BaseBdev2", 00:16:31.332 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:31.332 "is_configured": true, 00:16:31.332 "data_offset": 2048, 00:16:31.332 "data_size": 63488 00:16:31.332 }, 00:16:31.332 { 00:16:31.332 "name": "BaseBdev3", 00:16:31.332 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:31.332 "is_configured": true, 00:16:31.332 "data_offset": 2048, 00:16:31.332 "data_size": 63488 00:16:31.332 } 00:16:31.332 ] 00:16:31.332 }' 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.332 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.900 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.900 "name": "raid_bdev1", 00:16:31.900 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:31.900 "strip_size_kb": 64, 00:16:31.900 "state": "online", 00:16:31.900 "raid_level": "raid5f", 00:16:31.900 "superblock": true, 00:16:31.900 "num_base_bdevs": 3, 00:16:31.900 "num_base_bdevs_discovered": 2, 00:16:31.900 "num_base_bdevs_operational": 2, 00:16:31.900 "base_bdevs_list": [ 00:16:31.900 { 00:16:31.900 "name": null, 00:16:31.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.900 "is_configured": false, 00:16:31.900 "data_offset": 0, 00:16:31.901 "data_size": 63488 00:16:31.901 }, 00:16:31.901 { 00:16:31.901 "name": "BaseBdev2", 00:16:31.901 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:31.901 "is_configured": true, 00:16:31.901 "data_offset": 2048, 00:16:31.901 "data_size": 63488 00:16:31.901 }, 00:16:31.901 { 00:16:31.901 "name": "BaseBdev3", 00:16:31.901 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:31.901 "is_configured": true, 
00:16:31.901 "data_offset": 2048, 00:16:31.901 "data_size": 63488 00:16:31.901 } 00:16:31.901 ] 00:16:31.901 }' 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.901 [2024-11-20 17:50:58.965985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:31.901 [2024-11-20 17:50:58.966065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.901 [2024-11-20 17:50:58.966093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:31.901 [2024-11-20 17:50:58.966103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.901 [2024-11-20 17:50:58.966660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.901 [2024-11-20 
17:50:58.966684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:31.901 [2024-11-20 17:50:58.966779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:31.901 [2024-11-20 17:50:58.966794] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:31.901 [2024-11-20 17:50:58.966821] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:31.901 [2024-11-20 17:50:58.966832] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:31.901 BaseBdev1 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.901 17:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.839 17:50:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.839 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.840 17:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.840 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.099 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.099 "name": "raid_bdev1", 00:16:33.099 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:33.099 "strip_size_kb": 64, 00:16:33.099 "state": "online", 00:16:33.099 "raid_level": "raid5f", 00:16:33.099 "superblock": true, 00:16:33.099 "num_base_bdevs": 3, 00:16:33.099 "num_base_bdevs_discovered": 2, 00:16:33.099 "num_base_bdevs_operational": 2, 00:16:33.099 "base_bdevs_list": [ 00:16:33.099 { 00:16:33.099 "name": null, 00:16:33.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.099 "is_configured": false, 00:16:33.099 "data_offset": 0, 00:16:33.099 "data_size": 63488 00:16:33.099 }, 00:16:33.099 { 00:16:33.099 "name": "BaseBdev2", 00:16:33.099 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:33.099 "is_configured": true, 00:16:33.099 "data_offset": 2048, 00:16:33.099 "data_size": 63488 00:16:33.099 }, 00:16:33.099 { 00:16:33.099 "name": "BaseBdev3", 00:16:33.099 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:33.099 "is_configured": true, 00:16:33.099 "data_offset": 2048, 00:16:33.099 "data_size": 63488 00:16:33.099 } 00:16:33.099 ] 00:16:33.099 }' 00:16:33.099 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.099 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.359 "name": "raid_bdev1", 00:16:33.359 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:33.359 "strip_size_kb": 64, 00:16:33.359 "state": "online", 00:16:33.359 "raid_level": "raid5f", 00:16:33.359 "superblock": true, 00:16:33.359 "num_base_bdevs": 3, 00:16:33.359 "num_base_bdevs_discovered": 2, 00:16:33.359 "num_base_bdevs_operational": 2, 00:16:33.359 "base_bdevs_list": [ 00:16:33.359 { 00:16:33.359 "name": null, 00:16:33.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.359 "is_configured": false, 00:16:33.359 "data_offset": 0, 00:16:33.359 "data_size": 63488 00:16:33.359 }, 00:16:33.359 { 00:16:33.359 "name": "BaseBdev2", 00:16:33.359 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 
00:16:33.359 "is_configured": true, 00:16:33.359 "data_offset": 2048, 00:16:33.359 "data_size": 63488 00:16:33.359 }, 00:16:33.359 { 00:16:33.359 "name": "BaseBdev3", 00:16:33.359 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:33.359 "is_configured": true, 00:16:33.359 "data_offset": 2048, 00:16:33.359 "data_size": 63488 00:16:33.359 } 00:16:33.359 ] 00:16:33.359 }' 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.359 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.621 17:51:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.621 [2024-11-20 17:51:00.551297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.621 [2024-11-20 17:51:00.551532] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:33.621 [2024-11-20 17:51:00.551593] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:33.621 request: 00:16:33.621 { 00:16:33.621 "base_bdev": "BaseBdev1", 00:16:33.621 "raid_bdev": "raid_bdev1", 00:16:33.621 "method": "bdev_raid_add_base_bdev", 00:16:33.621 "req_id": 1 00:16:33.621 } 00:16:33.621 Got JSON-RPC error response 00:16:33.621 response: 00:16:33.621 { 00:16:33.621 "code": -22, 00:16:33.621 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:33.621 } 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.621 17:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.609 "name": "raid_bdev1", 00:16:34.609 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:34.609 "strip_size_kb": 64, 00:16:34.609 "state": "online", 00:16:34.609 "raid_level": "raid5f", 00:16:34.609 "superblock": true, 00:16:34.609 "num_base_bdevs": 3, 00:16:34.609 "num_base_bdevs_discovered": 2, 00:16:34.609 "num_base_bdevs_operational": 2, 00:16:34.609 "base_bdevs_list": [ 00:16:34.609 { 00:16:34.609 "name": null, 00:16:34.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.609 "is_configured": false, 00:16:34.609 "data_offset": 0, 00:16:34.609 "data_size": 63488 00:16:34.609 }, 00:16:34.609 { 00:16:34.609 
"name": "BaseBdev2", 00:16:34.609 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:34.609 "is_configured": true, 00:16:34.609 "data_offset": 2048, 00:16:34.609 "data_size": 63488 00:16:34.609 }, 00:16:34.609 { 00:16:34.609 "name": "BaseBdev3", 00:16:34.609 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:34.609 "is_configured": true, 00:16:34.609 "data_offset": 2048, 00:16:34.609 "data_size": 63488 00:16:34.609 } 00:16:34.609 ] 00:16:34.609 }' 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.609 17:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.869 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.129 "name": "raid_bdev1", 00:16:35.129 "uuid": "d1063e07-894d-41e2-98b5-10bb8034d722", 00:16:35.129 
"strip_size_kb": 64, 00:16:35.129 "state": "online", 00:16:35.129 "raid_level": "raid5f", 00:16:35.129 "superblock": true, 00:16:35.129 "num_base_bdevs": 3, 00:16:35.129 "num_base_bdevs_discovered": 2, 00:16:35.129 "num_base_bdevs_operational": 2, 00:16:35.129 "base_bdevs_list": [ 00:16:35.129 { 00:16:35.129 "name": null, 00:16:35.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.129 "is_configured": false, 00:16:35.129 "data_offset": 0, 00:16:35.129 "data_size": 63488 00:16:35.129 }, 00:16:35.129 { 00:16:35.129 "name": "BaseBdev2", 00:16:35.129 "uuid": "bdc6138d-0d70-50cc-8331-41a8643b93f0", 00:16:35.129 "is_configured": true, 00:16:35.129 "data_offset": 2048, 00:16:35.129 "data_size": 63488 00:16:35.129 }, 00:16:35.129 { 00:16:35.129 "name": "BaseBdev3", 00:16:35.129 "uuid": "68f0cb17-473e-5c76-a2fe-e7a0219d7c9f", 00:16:35.129 "is_configured": true, 00:16:35.129 "data_offset": 2048, 00:16:35.129 "data_size": 63488 00:16:35.129 } 00:16:35.129 ] 00:16:35.129 }' 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82494 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82494 ']' 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82494 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.129 17:51:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82494 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.129 killing process with pid 82494 00:16:35.129 Received shutdown signal, test time was about 60.000000 seconds 00:16:35.129 00:16:35.129 Latency(us) 00:16:35.129 [2024-11-20T17:51:02.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.129 [2024-11-20T17:51:02.305Z] =================================================================================================================== 00:16:35.129 [2024-11-20T17:51:02.305Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82494' 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82494 00:16:35.129 [2024-11-20 17:51:02.191710] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.129 [2024-11-20 17:51:02.191845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.129 17:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82494 00:16:35.129 [2024-11-20 17:51:02.191916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.129 [2024-11-20 17:51:02.191929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:35.700 [2024-11-20 17:51:02.603631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:36.641 17:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:36.641 00:16:36.641 real 0m23.085s 00:16:36.641 user 0m29.351s 
00:16:36.641 sys 0m2.779s 00:16:36.641 17:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.641 17:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.641 ************************************ 00:16:36.641 END TEST raid5f_rebuild_test_sb 00:16:36.641 ************************************ 00:16:36.642 17:51:03 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:36.642 17:51:03 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:36.642 17:51:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:36.642 17:51:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.642 17:51:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.903 ************************************ 00:16:36.903 START TEST raid5f_state_function_test 00:16:36.903 ************************************ 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83248 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83248' 00:16:36.903 Process raid pid: 83248 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83248 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83248 ']' 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.903 17:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.903 [2024-11-20 17:51:03.916901] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:16:36.903 [2024-11-20 17:51:03.917133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.163 [2024-11-20 17:51:04.088967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.163 [2024-11-20 17:51:04.222782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.423 [2024-11-20 17:51:04.444893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.423 [2024-11-20 17:51:04.444943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.683 [2024-11-20 17:51:04.748437] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:37.683 [2024-11-20 17:51:04.748561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:37.683 [2024-11-20 17:51:04.748575] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.683 [2024-11-20 17:51:04.748586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.683 [2024-11-20 17:51:04.748593] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:37.683 [2024-11-20 17:51:04.748603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.683 [2024-11-20 17:51:04.748609] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:37.683 [2024-11-20 17:51:04.748618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.683 17:51:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.683 "name": "Existed_Raid", 00:16:37.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.683 "strip_size_kb": 64, 00:16:37.683 "state": "configuring", 00:16:37.683 "raid_level": "raid5f", 00:16:37.683 "superblock": false, 00:16:37.683 "num_base_bdevs": 4, 00:16:37.683 "num_base_bdevs_discovered": 0, 00:16:37.683 "num_base_bdevs_operational": 4, 00:16:37.683 "base_bdevs_list": [ 00:16:37.683 { 00:16:37.683 "name": "BaseBdev1", 00:16:37.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.683 "is_configured": false, 00:16:37.683 "data_offset": 0, 00:16:37.683 "data_size": 0 00:16:37.683 }, 00:16:37.683 { 00:16:37.683 "name": "BaseBdev2", 00:16:37.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.683 "is_configured": false, 00:16:37.683 "data_offset": 0, 00:16:37.683 "data_size": 0 00:16:37.683 }, 00:16:37.683 { 00:16:37.683 "name": "BaseBdev3", 00:16:37.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.683 "is_configured": false, 00:16:37.683 "data_offset": 0, 00:16:37.683 "data_size": 0 00:16:37.683 }, 00:16:37.683 { 00:16:37.683 "name": "BaseBdev4", 00:16:37.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.683 "is_configured": false, 00:16:37.683 "data_offset": 0, 00:16:37.683 "data_size": 0 00:16:37.683 } 00:16:37.683 ] 00:16:37.683 }' 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.683 17:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.253 17:51:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:38.253 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.253 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.253 [2024-11-20 17:51:05.211539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.254 [2024-11-20 17:51:05.211618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.254 [2024-11-20 17:51:05.223528] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.254 [2024-11-20 17:51:05.223599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.254 [2024-11-20 17:51:05.223623] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.254 [2024-11-20 17:51:05.223644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.254 [2024-11-20 17:51:05.223660] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:38.254 [2024-11-20 17:51:05.223680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.254 [2024-11-20 17:51:05.223696] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:38.254 [2024-11-20 17:51:05.223715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.254 [2024-11-20 17:51:05.272623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.254 BaseBdev1 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.254 
17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.254 [ 00:16:38.254 { 00:16:38.254 "name": "BaseBdev1", 00:16:38.254 "aliases": [ 00:16:38.254 "b67729bc-5474-4259-8fb5-a9fe4d0c039d" 00:16:38.254 ], 00:16:38.254 "product_name": "Malloc disk", 00:16:38.254 "block_size": 512, 00:16:38.254 "num_blocks": 65536, 00:16:38.254 "uuid": "b67729bc-5474-4259-8fb5-a9fe4d0c039d", 00:16:38.254 "assigned_rate_limits": { 00:16:38.254 "rw_ios_per_sec": 0, 00:16:38.254 "rw_mbytes_per_sec": 0, 00:16:38.254 "r_mbytes_per_sec": 0, 00:16:38.254 "w_mbytes_per_sec": 0 00:16:38.254 }, 00:16:38.254 "claimed": true, 00:16:38.254 "claim_type": "exclusive_write", 00:16:38.254 "zoned": false, 00:16:38.254 "supported_io_types": { 00:16:38.254 "read": true, 00:16:38.254 "write": true, 00:16:38.254 "unmap": true, 00:16:38.254 "flush": true, 00:16:38.254 "reset": true, 00:16:38.254 "nvme_admin": false, 00:16:38.254 "nvme_io": false, 00:16:38.254 "nvme_io_md": false, 00:16:38.254 "write_zeroes": true, 00:16:38.254 "zcopy": true, 00:16:38.254 "get_zone_info": false, 00:16:38.254 "zone_management": false, 00:16:38.254 "zone_append": false, 00:16:38.254 "compare": false, 00:16:38.254 "compare_and_write": false, 00:16:38.254 "abort": true, 00:16:38.254 "seek_hole": false, 00:16:38.254 "seek_data": false, 00:16:38.254 "copy": true, 00:16:38.254 "nvme_iov_md": false 00:16:38.254 }, 00:16:38.254 "memory_domains": [ 00:16:38.254 { 00:16:38.254 "dma_device_id": "system", 00:16:38.254 "dma_device_type": 1 00:16:38.254 }, 00:16:38.254 { 00:16:38.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.254 "dma_device_type": 2 00:16:38.254 } 00:16:38.254 ], 00:16:38.254 "driver_specific": {} 00:16:38.254 } 
00:16:38.254 ] 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.254 "name": "Existed_Raid", 00:16:38.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.254 "strip_size_kb": 64, 00:16:38.254 "state": "configuring", 00:16:38.254 "raid_level": "raid5f", 00:16:38.254 "superblock": false, 00:16:38.254 "num_base_bdevs": 4, 00:16:38.254 "num_base_bdevs_discovered": 1, 00:16:38.254 "num_base_bdevs_operational": 4, 00:16:38.254 "base_bdevs_list": [ 00:16:38.254 { 00:16:38.254 "name": "BaseBdev1", 00:16:38.254 "uuid": "b67729bc-5474-4259-8fb5-a9fe4d0c039d", 00:16:38.254 "is_configured": true, 00:16:38.254 "data_offset": 0, 00:16:38.254 "data_size": 65536 00:16:38.254 }, 00:16:38.254 { 00:16:38.254 "name": "BaseBdev2", 00:16:38.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.254 "is_configured": false, 00:16:38.254 "data_offset": 0, 00:16:38.254 "data_size": 0 00:16:38.254 }, 00:16:38.254 { 00:16:38.254 "name": "BaseBdev3", 00:16:38.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.254 "is_configured": false, 00:16:38.254 "data_offset": 0, 00:16:38.254 "data_size": 0 00:16:38.254 }, 00:16:38.254 { 00:16:38.254 "name": "BaseBdev4", 00:16:38.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.254 "is_configured": false, 00:16:38.254 "data_offset": 0, 00:16:38.254 "data_size": 0 00:16:38.254 } 00:16:38.254 ] 00:16:38.254 }' 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.254 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:38.824 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.825 
[2024-11-20 17:51:05.751867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.825 [2024-11-20 17:51:05.751933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.825 [2024-11-20 17:51:05.763898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.825 [2024-11-20 17:51:05.765992] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.825 [2024-11-20 17:51:05.766054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.825 [2024-11-20 17:51:05.766065] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:38.825 [2024-11-20 17:51:05.766076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.825 [2024-11-20 17:51:05.766083] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:38.825 [2024-11-20 17:51:05.766091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.825 "name": "Existed_Raid", 00:16:38.825 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:38.825 "strip_size_kb": 64, 00:16:38.825 "state": "configuring", 00:16:38.825 "raid_level": "raid5f", 00:16:38.825 "superblock": false, 00:16:38.825 "num_base_bdevs": 4, 00:16:38.825 "num_base_bdevs_discovered": 1, 00:16:38.825 "num_base_bdevs_operational": 4, 00:16:38.825 "base_bdevs_list": [ 00:16:38.825 { 00:16:38.825 "name": "BaseBdev1", 00:16:38.825 "uuid": "b67729bc-5474-4259-8fb5-a9fe4d0c039d", 00:16:38.825 "is_configured": true, 00:16:38.825 "data_offset": 0, 00:16:38.825 "data_size": 65536 00:16:38.825 }, 00:16:38.825 { 00:16:38.825 "name": "BaseBdev2", 00:16:38.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.825 "is_configured": false, 00:16:38.825 "data_offset": 0, 00:16:38.825 "data_size": 0 00:16:38.825 }, 00:16:38.825 { 00:16:38.825 "name": "BaseBdev3", 00:16:38.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.825 "is_configured": false, 00:16:38.825 "data_offset": 0, 00:16:38.825 "data_size": 0 00:16:38.825 }, 00:16:38.825 { 00:16:38.825 "name": "BaseBdev4", 00:16:38.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.825 "is_configured": false, 00:16:38.825 "data_offset": 0, 00:16:38.825 "data_size": 0 00:16:38.825 } 00:16:38.825 ] 00:16:38.825 }' 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.825 17:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.084 [2024-11-20 17:51:06.243766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.084 BaseBdev2 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.084 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.345 [ 00:16:39.345 { 00:16:39.345 "name": "BaseBdev2", 00:16:39.345 "aliases": [ 00:16:39.345 "05d6e730-7a54-490e-af24-04a1ad7a252f" 00:16:39.345 ], 00:16:39.345 "product_name": "Malloc disk", 00:16:39.345 "block_size": 512, 00:16:39.345 "num_blocks": 65536, 00:16:39.345 "uuid": "05d6e730-7a54-490e-af24-04a1ad7a252f", 00:16:39.345 "assigned_rate_limits": { 00:16:39.345 "rw_ios_per_sec": 0, 00:16:39.345 "rw_mbytes_per_sec": 0, 00:16:39.345 
"r_mbytes_per_sec": 0, 00:16:39.345 "w_mbytes_per_sec": 0 00:16:39.345 }, 00:16:39.345 "claimed": true, 00:16:39.345 "claim_type": "exclusive_write", 00:16:39.345 "zoned": false, 00:16:39.345 "supported_io_types": { 00:16:39.345 "read": true, 00:16:39.345 "write": true, 00:16:39.345 "unmap": true, 00:16:39.345 "flush": true, 00:16:39.345 "reset": true, 00:16:39.345 "nvme_admin": false, 00:16:39.345 "nvme_io": false, 00:16:39.345 "nvme_io_md": false, 00:16:39.345 "write_zeroes": true, 00:16:39.345 "zcopy": true, 00:16:39.345 "get_zone_info": false, 00:16:39.345 "zone_management": false, 00:16:39.345 "zone_append": false, 00:16:39.345 "compare": false, 00:16:39.345 "compare_and_write": false, 00:16:39.345 "abort": true, 00:16:39.345 "seek_hole": false, 00:16:39.345 "seek_data": false, 00:16:39.345 "copy": true, 00:16:39.345 "nvme_iov_md": false 00:16:39.345 }, 00:16:39.345 "memory_domains": [ 00:16:39.345 { 00:16:39.345 "dma_device_id": "system", 00:16:39.345 "dma_device_type": 1 00:16:39.345 }, 00:16:39.345 { 00:16:39.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.345 "dma_device_type": 2 00:16:39.345 } 00:16:39.345 ], 00:16:39.345 "driver_specific": {} 00:16:39.345 } 00:16:39.345 ] 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.345 "name": "Existed_Raid", 00:16:39.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.345 "strip_size_kb": 64, 00:16:39.345 "state": "configuring", 00:16:39.345 "raid_level": "raid5f", 00:16:39.345 "superblock": false, 00:16:39.345 "num_base_bdevs": 4, 00:16:39.345 "num_base_bdevs_discovered": 2, 00:16:39.345 "num_base_bdevs_operational": 4, 00:16:39.345 "base_bdevs_list": [ 00:16:39.345 { 00:16:39.345 "name": "BaseBdev1", 00:16:39.345 "uuid": 
"b67729bc-5474-4259-8fb5-a9fe4d0c039d", 00:16:39.345 "is_configured": true, 00:16:39.345 "data_offset": 0, 00:16:39.345 "data_size": 65536 00:16:39.345 }, 00:16:39.345 { 00:16:39.345 "name": "BaseBdev2", 00:16:39.345 "uuid": "05d6e730-7a54-490e-af24-04a1ad7a252f", 00:16:39.345 "is_configured": true, 00:16:39.345 "data_offset": 0, 00:16:39.345 "data_size": 65536 00:16:39.345 }, 00:16:39.345 { 00:16:39.345 "name": "BaseBdev3", 00:16:39.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.345 "is_configured": false, 00:16:39.345 "data_offset": 0, 00:16:39.345 "data_size": 0 00:16:39.345 }, 00:16:39.345 { 00:16:39.345 "name": "BaseBdev4", 00:16:39.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.345 "is_configured": false, 00:16:39.345 "data_offset": 0, 00:16:39.345 "data_size": 0 00:16:39.345 } 00:16:39.345 ] 00:16:39.345 }' 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.345 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.606 [2024-11-20 17:51:06.755939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.606 BaseBdev3 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.606 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.866 [ 00:16:39.866 { 00:16:39.866 "name": "BaseBdev3", 00:16:39.866 "aliases": [ 00:16:39.866 "ed759d49-3516-449a-92dd-6e7885697234" 00:16:39.866 ], 00:16:39.866 "product_name": "Malloc disk", 00:16:39.866 "block_size": 512, 00:16:39.866 "num_blocks": 65536, 00:16:39.866 "uuid": "ed759d49-3516-449a-92dd-6e7885697234", 00:16:39.866 "assigned_rate_limits": { 00:16:39.866 "rw_ios_per_sec": 0, 00:16:39.866 "rw_mbytes_per_sec": 0, 00:16:39.866 "r_mbytes_per_sec": 0, 00:16:39.866 "w_mbytes_per_sec": 0 00:16:39.866 }, 00:16:39.866 "claimed": true, 00:16:39.866 "claim_type": "exclusive_write", 00:16:39.866 "zoned": false, 00:16:39.866 "supported_io_types": { 00:16:39.866 "read": true, 00:16:39.866 "write": true, 00:16:39.866 "unmap": true, 00:16:39.866 "flush": true, 00:16:39.866 "reset": true, 00:16:39.866 "nvme_admin": false, 
00:16:39.866 "nvme_io": false, 00:16:39.866 "nvme_io_md": false, 00:16:39.866 "write_zeroes": true, 00:16:39.866 "zcopy": true, 00:16:39.866 "get_zone_info": false, 00:16:39.866 "zone_management": false, 00:16:39.866 "zone_append": false, 00:16:39.866 "compare": false, 00:16:39.866 "compare_and_write": false, 00:16:39.866 "abort": true, 00:16:39.866 "seek_hole": false, 00:16:39.866 "seek_data": false, 00:16:39.866 "copy": true, 00:16:39.866 "nvme_iov_md": false 00:16:39.866 }, 00:16:39.866 "memory_domains": [ 00:16:39.866 { 00:16:39.866 "dma_device_id": "system", 00:16:39.866 "dma_device_type": 1 00:16:39.866 }, 00:16:39.866 { 00:16:39.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.866 "dma_device_type": 2 00:16:39.866 } 00:16:39.866 ], 00:16:39.866 "driver_specific": {} 00:16:39.866 } 00:16:39.866 ] 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.866 "name": "Existed_Raid", 00:16:39.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.866 "strip_size_kb": 64, 00:16:39.866 "state": "configuring", 00:16:39.866 "raid_level": "raid5f", 00:16:39.866 "superblock": false, 00:16:39.866 "num_base_bdevs": 4, 00:16:39.866 "num_base_bdevs_discovered": 3, 00:16:39.866 "num_base_bdevs_operational": 4, 00:16:39.866 "base_bdevs_list": [ 00:16:39.866 { 00:16:39.866 "name": "BaseBdev1", 00:16:39.866 "uuid": "b67729bc-5474-4259-8fb5-a9fe4d0c039d", 00:16:39.866 "is_configured": true, 00:16:39.866 "data_offset": 0, 00:16:39.866 "data_size": 65536 00:16:39.866 }, 00:16:39.866 { 00:16:39.866 "name": "BaseBdev2", 00:16:39.866 "uuid": "05d6e730-7a54-490e-af24-04a1ad7a252f", 00:16:39.866 "is_configured": true, 00:16:39.866 "data_offset": 0, 00:16:39.866 "data_size": 65536 00:16:39.866 }, 00:16:39.866 { 
00:16:39.866 "name": "BaseBdev3", 00:16:39.866 "uuid": "ed759d49-3516-449a-92dd-6e7885697234", 00:16:39.866 "is_configured": true, 00:16:39.866 "data_offset": 0, 00:16:39.866 "data_size": 65536 00:16:39.866 }, 00:16:39.866 { 00:16:39.866 "name": "BaseBdev4", 00:16:39.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.866 "is_configured": false, 00:16:39.866 "data_offset": 0, 00:16:39.866 "data_size": 0 00:16:39.866 } 00:16:39.866 ] 00:16:39.866 }' 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.866 17:51:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.127 [2024-11-20 17:51:07.255906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:40.127 [2024-11-20 17:51:07.256057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:40.127 [2024-11-20 17:51:07.256074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:40.127 [2024-11-20 17:51:07.256401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:40.127 [2024-11-20 17:51:07.263350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:40.127 [2024-11-20 17:51:07.263408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:40.127 [2024-11-20 17:51:07.263739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.127 BaseBdev4 00:16:40.127 17:51:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.127 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.127 [ 00:16:40.127 { 00:16:40.127 "name": "BaseBdev4", 00:16:40.127 "aliases": [ 00:16:40.127 "408b81c6-0c8e-4519-9eda-c4fe91587580" 00:16:40.127 ], 00:16:40.127 "product_name": "Malloc disk", 00:16:40.127 "block_size": 512, 00:16:40.127 "num_blocks": 65536, 00:16:40.127 "uuid": "408b81c6-0c8e-4519-9eda-c4fe91587580", 00:16:40.127 "assigned_rate_limits": { 00:16:40.127 "rw_ios_per_sec": 0, 00:16:40.127 
"rw_mbytes_per_sec": 0, 00:16:40.127 "r_mbytes_per_sec": 0, 00:16:40.127 "w_mbytes_per_sec": 0 00:16:40.127 }, 00:16:40.127 "claimed": true, 00:16:40.127 "claim_type": "exclusive_write", 00:16:40.127 "zoned": false, 00:16:40.127 "supported_io_types": { 00:16:40.127 "read": true, 00:16:40.127 "write": true, 00:16:40.127 "unmap": true, 00:16:40.127 "flush": true, 00:16:40.127 "reset": true, 00:16:40.127 "nvme_admin": false, 00:16:40.127 "nvme_io": false, 00:16:40.127 "nvme_io_md": false, 00:16:40.127 "write_zeroes": true, 00:16:40.127 "zcopy": true, 00:16:40.127 "get_zone_info": false, 00:16:40.127 "zone_management": false, 00:16:40.127 "zone_append": false, 00:16:40.127 "compare": false, 00:16:40.127 "compare_and_write": false, 00:16:40.127 "abort": true, 00:16:40.127 "seek_hole": false, 00:16:40.127 "seek_data": false, 00:16:40.127 "copy": true, 00:16:40.127 "nvme_iov_md": false 00:16:40.127 }, 00:16:40.127 "memory_domains": [ 00:16:40.127 { 00:16:40.127 "dma_device_id": "system", 00:16:40.127 "dma_device_type": 1 00:16:40.127 }, 00:16:40.127 { 00:16:40.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.387 "dma_device_type": 2 00:16:40.387 } 00:16:40.387 ], 00:16:40.387 "driver_specific": {} 00:16:40.387 } 00:16:40.387 ] 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.387 17:51:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.387 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.387 "name": "Existed_Raid", 00:16:40.387 "uuid": "68502b25-0f03-44f5-9776-f7375632f2b4", 00:16:40.387 "strip_size_kb": 64, 00:16:40.387 "state": "online", 00:16:40.387 "raid_level": "raid5f", 00:16:40.387 "superblock": false, 00:16:40.387 "num_base_bdevs": 4, 00:16:40.387 "num_base_bdevs_discovered": 4, 00:16:40.388 "num_base_bdevs_operational": 4, 00:16:40.388 "base_bdevs_list": [ 00:16:40.388 { 00:16:40.388 "name": 
"BaseBdev1",
00:16:40.388 "uuid": "b67729bc-5474-4259-8fb5-a9fe4d0c039d",
00:16:40.388 "is_configured": true,
00:16:40.388 "data_offset": 0,
00:16:40.388 "data_size": 65536
00:16:40.388 },
00:16:40.388 {
00:16:40.388 "name": "BaseBdev2",
00:16:40.388 "uuid": "05d6e730-7a54-490e-af24-04a1ad7a252f",
00:16:40.388 "is_configured": true,
00:16:40.388 "data_offset": 0,
00:16:40.388 "data_size": 65536
00:16:40.388 },
00:16:40.388 {
00:16:40.388 "name": "BaseBdev3",
00:16:40.388 "uuid": "ed759d49-3516-449a-92dd-6e7885697234",
00:16:40.388 "is_configured": true,
00:16:40.388 "data_offset": 0,
00:16:40.388 "data_size": 65536
00:16:40.388 },
00:16:40.388 {
00:16:40.388 "name": "BaseBdev4",
00:16:40.388 "uuid": "408b81c6-0c8e-4519-9eda-c4fe91587580",
00:16:40.388 "is_configured": true,
00:16:40.388 "data_offset": 0,
00:16:40.388 "data_size": 65536
00:16:40.388 }
00:16:40.388 ]
00:16:40.388 }'
00:16:40.388 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:40.388 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.648 [2024-11-20 17:51:07.743941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.648 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:40.648 "name": "Existed_Raid",
00:16:40.648 "aliases": [
00:16:40.648 "68502b25-0f03-44f5-9776-f7375632f2b4"
00:16:40.648 ],
00:16:40.648 "product_name": "Raid Volume",
00:16:40.648 "block_size": 512,
00:16:40.648 "num_blocks": 196608,
00:16:40.648 "uuid": "68502b25-0f03-44f5-9776-f7375632f2b4",
00:16:40.648 "assigned_rate_limits": {
00:16:40.648 "rw_ios_per_sec": 0,
00:16:40.648 "rw_mbytes_per_sec": 0,
00:16:40.648 "r_mbytes_per_sec": 0,
00:16:40.648 "w_mbytes_per_sec": 0
00:16:40.648 },
00:16:40.648 "claimed": false,
00:16:40.648 "zoned": false,
00:16:40.648 "supported_io_types": {
00:16:40.648 "read": true,
00:16:40.648 "write": true,
00:16:40.648 "unmap": false,
00:16:40.648 "flush": false,
00:16:40.648 "reset": true,
00:16:40.648 "nvme_admin": false,
00:16:40.648 "nvme_io": false,
00:16:40.648 "nvme_io_md": false,
00:16:40.648 "write_zeroes": true,
00:16:40.648 "zcopy": false,
00:16:40.648 "get_zone_info": false,
00:16:40.648 "zone_management": false,
00:16:40.648 "zone_append": false,
00:16:40.648 "compare": false,
00:16:40.648 "compare_and_write": false,
00:16:40.648 "abort": false,
00:16:40.648 "seek_hole": false,
00:16:40.648 "seek_data": false,
00:16:40.648 "copy": false,
00:16:40.648 "nvme_iov_md": false
00:16:40.648 },
00:16:40.648 "driver_specific": {
00:16:40.648 "raid": {
00:16:40.648 "uuid": "68502b25-0f03-44f5-9776-f7375632f2b4",
00:16:40.648 "strip_size_kb": 64,
00:16:40.648 "state": "online",
00:16:40.648 "raid_level": "raid5f",
00:16:40.648 "superblock": false,
00:16:40.648 "num_base_bdevs": 4,
00:16:40.648 "num_base_bdevs_discovered": 4,
00:16:40.648 "num_base_bdevs_operational": 4,
00:16:40.648 "base_bdevs_list": [
00:16:40.648 {
00:16:40.648 "name": "BaseBdev1",
00:16:40.648 "uuid": "b67729bc-5474-4259-8fb5-a9fe4d0c039d",
00:16:40.649 "is_configured": true,
00:16:40.649 "data_offset": 0,
00:16:40.649 "data_size": 65536
00:16:40.649 },
00:16:40.649 {
00:16:40.649 "name": "BaseBdev2",
00:16:40.649 "uuid": "05d6e730-7a54-490e-af24-04a1ad7a252f",
00:16:40.649 "is_configured": true,
00:16:40.649 "data_offset": 0,
00:16:40.649 "data_size": 65536
00:16:40.649 },
00:16:40.649 {
00:16:40.649 "name": "BaseBdev3",
00:16:40.649 "uuid": "ed759d49-3516-449a-92dd-6e7885697234",
00:16:40.649 "is_configured": true,
00:16:40.649 "data_offset": 0,
00:16:40.649 "data_size": 65536
00:16:40.649 },
00:16:40.649 {
00:16:40.649 "name": "BaseBdev4",
00:16:40.649 "uuid": "408b81c6-0c8e-4519-9eda-c4fe91587580",
00:16:40.649 "is_configured": true,
00:16:40.649 "data_offset": 0,
00:16:40.649 "data_size": 65536
00:16:40.649 }
00:16:40.649 ]
00:16:40.649 }
00:16:40.649 }
00:16:40.649 }'
00:16:40.649 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:40.649 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:16:40.649 BaseBdev2
00:16:40.649 BaseBdev3
00:16:40.649 BaseBdev4'
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:40.909 17:51:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.909 17:51:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.909 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.909 [2024-11-20 17:51:08.067187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:41.169 17:51:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:41.169 "name": "Existed_Raid",
00:16:41.169 "uuid": "68502b25-0f03-44f5-9776-f7375632f2b4",
00:16:41.169 "strip_size_kb": 64,
00:16:41.169 "state": "online",
00:16:41.169 "raid_level": "raid5f",
00:16:41.169 "superblock": false,
00:16:41.169 "num_base_bdevs": 4,
00:16:41.169 "num_base_bdevs_discovered": 3,
00:16:41.169 "num_base_bdevs_operational": 3,
00:16:41.169 "base_bdevs_list": [
00:16:41.169 {
00:16:41.169 "name": null,
00:16:41.169 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:41.169 "is_configured": false,
00:16:41.169 "data_offset": 0,
00:16:41.169 "data_size": 65536
00:16:41.169 },
00:16:41.169 {
00:16:41.169 "name": "BaseBdev2",
00:16:41.169 "uuid": "05d6e730-7a54-490e-af24-04a1ad7a252f",
00:16:41.169 "is_configured": true,
00:16:41.169 "data_offset": 0,
00:16:41.169 "data_size": 65536
00:16:41.169 },
00:16:41.169 {
00:16:41.169 "name": "BaseBdev3",
00:16:41.169 "uuid": "ed759d49-3516-449a-92dd-6e7885697234",
00:16:41.169 "is_configured": true,
00:16:41.169 "data_offset": 0,
00:16:41.169 "data_size": 65536
00:16:41.169 },
00:16:41.169 {
00:16:41.169 "name": "BaseBdev4",
00:16:41.169 "uuid": "408b81c6-0c8e-4519-9eda-c4fe91587580",
00:16:41.169 "is_configured": true,
00:16:41.169 "data_offset": 0,
00:16:41.169 "data_size": 65536
00:16:41.169 }
00:16:41.169 ]
00:16:41.169 }'
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:41.169 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.429 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:16:41.429 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.688 [2024-11-20 17:51:08.645381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:41.688 [2024-11-20 17:51:08.645539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:41.688 [2024-11-20 17:51:08.744482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.688 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.688 [2024-11-20 17:51:08.804371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]'
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.947 17:51:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.947 [2024-11-20 17:51:08.964199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:16:41.947 [2024-11-20 17:51:08.964305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.947 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.208 BaseBdev2
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.208 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.208 [
00:16:42.208 {
00:16:42.208 "name": "BaseBdev2",
00:16:42.208 "aliases": [
00:16:42.208 "7564709e-7520-47c2-b9f1-31a989dbc68f"
00:16:42.208 ],
00:16:42.208 "product_name": "Malloc disk",
00:16:42.208 "block_size": 512,
00:16:42.208 "num_blocks": 65536,
00:16:42.208 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f",
00:16:42.208 "assigned_rate_limits": {
00:16:42.208 "rw_ios_per_sec": 0,
00:16:42.208 "rw_mbytes_per_sec": 0,
00:16:42.208 "r_mbytes_per_sec": 0,
00:16:42.208 "w_mbytes_per_sec": 0
00:16:42.208 },
00:16:42.208 "claimed": false,
00:16:42.208 "zoned": false,
00:16:42.208 "supported_io_types": {
00:16:42.208 "read": true,
00:16:42.208 "write": true,
00:16:42.208 "unmap": true,
00:16:42.208 "flush": true,
00:16:42.208 "reset": true,
00:16:42.208 "nvme_admin": false,
00:16:42.208 "nvme_io": false,
00:16:42.208 "nvme_io_md": false,
00:16:42.208 "write_zeroes": true,
00:16:42.208 "zcopy": true,
00:16:42.208 "get_zone_info": false,
00:16:42.208 "zone_management": false,
00:16:42.208 "zone_append": false,
00:16:42.208 "compare": false,
00:16:42.208 "compare_and_write": false,
00:16:42.208 "abort": true,
00:16:42.208 "seek_hole": false,
00:16:42.208 "seek_data": false,
00:16:42.208 "copy": true,
00:16:42.208 "nvme_iov_md": false
00:16:42.208 },
00:16:42.208 "memory_domains": [
00:16:42.208 {
00:16:42.208 "dma_device_id": "system",
00:16:42.208 "dma_device_type": 1
00:16:42.208 },
00:16:42.208 {
00:16:42.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:42.208 "dma_device_type": 2
00:16:42.208 }
00:16:42.208 ],
00:16:42.209 "driver_specific": {}
00:16:42.209 }
00:16:42.209 ]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.209 BaseBdev3
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.209 [
00:16:42.209 {
00:16:42.209 "name": "BaseBdev3",
00:16:42.209 "aliases": [
00:16:42.209 "655703dc-5ac3-4427-b5dd-e8dc7fb532e9"
00:16:42.209 ],
00:16:42.209 "product_name": "Malloc disk",
00:16:42.209 "block_size": 512,
00:16:42.209 "num_blocks": 65536,
00:16:42.209 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9",
00:16:42.209 "assigned_rate_limits": {
00:16:42.209 "rw_ios_per_sec": 0,
00:16:42.209 "rw_mbytes_per_sec": 0,
00:16:42.209 "r_mbytes_per_sec": 0,
00:16:42.209 "w_mbytes_per_sec": 0
00:16:42.209 },
00:16:42.209 "claimed": false,
00:16:42.209 "zoned": false,
00:16:42.209 "supported_io_types": {
00:16:42.209 "read": true,
00:16:42.209 "write": true,
00:16:42.209 "unmap": true,
00:16:42.209 "flush": true,
00:16:42.209 "reset": true,
00:16:42.209 "nvme_admin": false,
00:16:42.209 "nvme_io": false,
00:16:42.209 "nvme_io_md": false,
00:16:42.209 "write_zeroes": true,
00:16:42.209 "zcopy": true,
00:16:42.209 "get_zone_info": false,
00:16:42.209 "zone_management": false,
00:16:42.209 "zone_append": false,
00:16:42.209 "compare": false,
00:16:42.209 "compare_and_write": false,
00:16:42.209 "abort": true,
00:16:42.209 "seek_hole": false,
00:16:42.209 "seek_data": false,
00:16:42.209 "copy": true,
00:16:42.209 "nvme_iov_md": false
00:16:42.209 },
00:16:42.209 "memory_domains": [
00:16:42.209 {
00:16:42.209 "dma_device_id": "system",
00:16:42.209 "dma_device_type": 1
00:16:42.209 },
00:16:42.209 {
00:16:42.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:42.209 "dma_device_type": 2
00:16:42.209 }
00:16:42.209 ],
00:16:42.209 "driver_specific": {}
00:16:42.209 }
00:16:42.209 ]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.209 BaseBdev4
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.209 [
00:16:42.209 {
00:16:42.209 "name": "BaseBdev4",
00:16:42.209 "aliases": [
00:16:42.209 "007be04d-e162-41c8-99ac-d814499698fe"
00:16:42.209 ],
00:16:42.209 "product_name": "Malloc disk",
00:16:42.209 "block_size": 512,
00:16:42.209 "num_blocks": 65536,
00:16:42.209 "uuid": "007be04d-e162-41c8-99ac-d814499698fe",
00:16:42.209 "assigned_rate_limits": {
00:16:42.209 "rw_ios_per_sec": 0,
00:16:42.209 "rw_mbytes_per_sec": 0,
00:16:42.209 "r_mbytes_per_sec": 0,
00:16:42.209 "w_mbytes_per_sec": 0
00:16:42.209 },
00:16:42.209 "claimed": false,
00:16:42.209 "zoned": false,
00:16:42.209 "supported_io_types": {
00:16:42.209 "read": true,
00:16:42.209 "write": true,
00:16:42.209 "unmap": true,
00:16:42.209 "flush": true,
00:16:42.209 "reset": true,
00:16:42.209 "nvme_admin": false,
00:16:42.209 "nvme_io": false,
00:16:42.209 "nvme_io_md": false,
00:16:42.209 "write_zeroes": true,
00:16:42.209 "zcopy": true,
00:16:42.209 "get_zone_info": false,
00:16:42.209 "zone_management": false,
00:16:42.209 "zone_append": false,
00:16:42.209 "compare": false,
00:16:42.209 "compare_and_write": false,
00:16:42.209 "abort": true,
00:16:42.209 "seek_hole": false,
00:16:42.209 "seek_data": false,
00:16:42.209 "copy": true,
00:16:42.209 "nvme_iov_md": false
00:16:42.209 },
00:16:42.209 "memory_domains": [
00:16:42.209 {
00:16:42.209 "dma_device_id": "system",
00:16:42.209 "dma_device_type": 1
00:16:42.209 },
00:16:42.209 {
00:16:42.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:42.209 "dma_device_type": 2
00:16:42.209 }
00:16:42.209 ],
00:16:42.209 "driver_specific": {}
00:16:42.209 }
00:16:42.209 ]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:42.209 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:42.210 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:42.210 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:42.210 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.210 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.210 [2024-11-20 17:51:09.382044] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:42.210 [2024-11-20 17:51:09.382132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:42.210 [2024-11-20 17:51:09.382178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:42.210 [2024-11-20 17:51:09.384302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:42.210 [2024-11-20 17:51:09.384390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:42.470 "name": "Existed_Raid",
00:16:42.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:42.470 "strip_size_kb": 64,
00:16:42.470 "state": "configuring",
00:16:42.470 "raid_level": "raid5f",
00:16:42.470 "superblock": false,
00:16:42.470 "num_base_bdevs": 4,
00:16:42.470 "num_base_bdevs_discovered": 3,
00:16:42.470 "num_base_bdevs_operational": 4,
00:16:42.470 "base_bdevs_list": [
00:16:42.470 {
00:16:42.470 "name": "BaseBdev1",
00:16:42.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:42.470 "is_configured": false,
00:16:42.470 "data_offset": 0,
00:16:42.470 "data_size": 0
00:16:42.470 },
00:16:42.470 {
00:16:42.470 "name": "BaseBdev2",
00:16:42.470 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f",
00:16:42.470 "is_configured": true,
00:16:42.470 "data_offset": 0,
00:16:42.470 "data_size": 65536
00:16:42.470 },
00:16:42.470 {
00:16:42.470 "name": "BaseBdev3",
00:16:42.470 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9",
00:16:42.470 "is_configured": true,
00:16:42.470 "data_offset": 0,
00:16:42.470 "data_size": 65536
00:16:42.470 },
00:16:42.470 {
00:16:42.470 "name": "BaseBdev4",
00:16:42.470 "uuid": "007be04d-e162-41c8-99ac-d814499698fe",
00:16:42.470 "is_configured": true,
00:16:42.470 "data_offset": 0,
00:16:42.470 "data_size": 65536
00:16:42.470 }
00:16:42.470 ]
00:16:42.470 }'
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:42.470 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.730 [2024-11-20 17:51:09.789357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
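The verify_raid_bdev_state helper traced above fetches the raid bdev with rpc_cmd bdev_raid_get_bdevs and filters it by name with jq before comparing individual fields. A minimal standalone sketch of that selection step, using a sample of the JSON shape seen in this log (no running SPDK target is assumed; jq must be installed):

```shell
#!/bin/sh
# Abridged sample of bdev_raid_get_bdevs output, field values taken from the log.
json='[
  {"name": "Existed_Raid", "state": "configuring", "raid_level": "raid5f",
   "strip_size_kb": 64, "num_base_bdevs": 4, "num_base_bdevs_discovered": 2}
]'

# Same filter the test script uses at bdev_raid.sh@113:
# select the entry whose name matches the raid bdev under test.
info=$(printf '%s' "$json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Compare one field at a time, as verify_raid_bdev_state does with $expected_state.
state=$(printf '%s' "$info" | jq -r '.state')
echo "state=$state"
```

Selecting by name rather than indexing `.[0]` keeps the check correct even when the RPC returns multiple raid bdevs.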
00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.730 "name": "Existed_Raid", 00:16:42.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.730 "strip_size_kb": 64, 00:16:42.730 "state": "configuring", 00:16:42.730 "raid_level": "raid5f", 00:16:42.730 "superblock": false, 00:16:42.730 "num_base_bdevs": 4, 
00:16:42.730 "num_base_bdevs_discovered": 2, 00:16:42.730 "num_base_bdevs_operational": 4, 00:16:42.730 "base_bdevs_list": [ 00:16:42.730 { 00:16:42.730 "name": "BaseBdev1", 00:16:42.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.730 "is_configured": false, 00:16:42.730 "data_offset": 0, 00:16:42.730 "data_size": 0 00:16:42.730 }, 00:16:42.730 { 00:16:42.730 "name": null, 00:16:42.730 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f", 00:16:42.730 "is_configured": false, 00:16:42.730 "data_offset": 0, 00:16:42.730 "data_size": 65536 00:16:42.730 }, 00:16:42.730 { 00:16:42.730 "name": "BaseBdev3", 00:16:42.730 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9", 00:16:42.730 "is_configured": true, 00:16:42.730 "data_offset": 0, 00:16:42.730 "data_size": 65536 00:16:42.730 }, 00:16:42.730 { 00:16:42.730 "name": "BaseBdev4", 00:16:42.730 "uuid": "007be04d-e162-41c8-99ac-d814499698fe", 00:16:42.730 "is_configured": true, 00:16:42.730 "data_offset": 0, 00:16:42.730 "data_size": 65536 00:16:42.730 } 00:16:42.730 ] 00:16:42.730 }' 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.730 17:51:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:43.300 17:51:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.300 [2024-11-20 17:51:10.354182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.300 BaseBdev1 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.300 17:51:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.300 [ 00:16:43.300 { 00:16:43.300 "name": "BaseBdev1", 00:16:43.300 "aliases": [ 00:16:43.300 "c71455e3-5b3b-409b-885f-095e5cdb3924" 00:16:43.300 ], 00:16:43.300 "product_name": "Malloc disk", 00:16:43.300 "block_size": 512, 00:16:43.300 "num_blocks": 65536, 00:16:43.300 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:43.300 "assigned_rate_limits": { 00:16:43.300 "rw_ios_per_sec": 0, 00:16:43.300 "rw_mbytes_per_sec": 0, 00:16:43.300 "r_mbytes_per_sec": 0, 00:16:43.300 "w_mbytes_per_sec": 0 00:16:43.300 }, 00:16:43.300 "claimed": true, 00:16:43.300 "claim_type": "exclusive_write", 00:16:43.300 "zoned": false, 00:16:43.300 "supported_io_types": { 00:16:43.300 "read": true, 00:16:43.300 "write": true, 00:16:43.300 "unmap": true, 00:16:43.300 "flush": true, 00:16:43.300 "reset": true, 00:16:43.300 "nvme_admin": false, 00:16:43.300 "nvme_io": false, 00:16:43.300 "nvme_io_md": false, 00:16:43.300 "write_zeroes": true, 00:16:43.300 "zcopy": true, 00:16:43.300 "get_zone_info": false, 00:16:43.300 "zone_management": false, 00:16:43.300 "zone_append": false, 00:16:43.300 "compare": false, 00:16:43.300 "compare_and_write": false, 00:16:43.300 "abort": true, 00:16:43.300 "seek_hole": false, 00:16:43.300 "seek_data": false, 00:16:43.300 "copy": true, 00:16:43.300 "nvme_iov_md": false 00:16:43.300 }, 00:16:43.300 "memory_domains": [ 00:16:43.300 { 00:16:43.300 "dma_device_id": "system", 00:16:43.300 "dma_device_type": 1 00:16:43.300 }, 00:16:43.300 { 00:16:43.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.300 "dma_device_type": 2 00:16:43.300 } 00:16:43.300 ], 00:16:43.300 "driver_specific": {} 00:16:43.300 } 00:16:43.300 ] 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:43.300 17:51:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.300 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.301 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.301 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.301 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.301 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.301 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.301 "name": "Existed_Raid", 00:16:43.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.301 "strip_size_kb": 64, 00:16:43.301 "state": 
"configuring", 00:16:43.301 "raid_level": "raid5f", 00:16:43.301 "superblock": false, 00:16:43.301 "num_base_bdevs": 4, 00:16:43.301 "num_base_bdevs_discovered": 3, 00:16:43.301 "num_base_bdevs_operational": 4, 00:16:43.301 "base_bdevs_list": [ 00:16:43.301 { 00:16:43.301 "name": "BaseBdev1", 00:16:43.301 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:43.301 "is_configured": true, 00:16:43.301 "data_offset": 0, 00:16:43.301 "data_size": 65536 00:16:43.301 }, 00:16:43.301 { 00:16:43.301 "name": null, 00:16:43.301 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f", 00:16:43.301 "is_configured": false, 00:16:43.301 "data_offset": 0, 00:16:43.301 "data_size": 65536 00:16:43.301 }, 00:16:43.301 { 00:16:43.301 "name": "BaseBdev3", 00:16:43.301 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9", 00:16:43.301 "is_configured": true, 00:16:43.301 "data_offset": 0, 00:16:43.301 "data_size": 65536 00:16:43.301 }, 00:16:43.301 { 00:16:43.301 "name": "BaseBdev4", 00:16:43.301 "uuid": "007be04d-e162-41c8-99ac-d814499698fe", 00:16:43.301 "is_configured": true, 00:16:43.301 "data_offset": 0, 00:16:43.301 "data_size": 65536 00:16:43.301 } 00:16:43.301 ] 00:16:43.301 }' 00:16:43.301 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.301 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.871 17:51:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.871 [2024-11-20 17:51:10.889377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.871 17:51:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.871 "name": "Existed_Raid", 00:16:43.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.871 "strip_size_kb": 64, 00:16:43.871 "state": "configuring", 00:16:43.871 "raid_level": "raid5f", 00:16:43.871 "superblock": false, 00:16:43.871 "num_base_bdevs": 4, 00:16:43.871 "num_base_bdevs_discovered": 2, 00:16:43.871 "num_base_bdevs_operational": 4, 00:16:43.871 "base_bdevs_list": [ 00:16:43.871 { 00:16:43.871 "name": "BaseBdev1", 00:16:43.871 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:43.871 "is_configured": true, 00:16:43.871 "data_offset": 0, 00:16:43.871 "data_size": 65536 00:16:43.871 }, 00:16:43.871 { 00:16:43.871 "name": null, 00:16:43.871 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f", 00:16:43.871 "is_configured": false, 00:16:43.871 "data_offset": 0, 00:16:43.871 "data_size": 65536 00:16:43.871 }, 00:16:43.871 { 00:16:43.871 "name": null, 00:16:43.871 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9", 00:16:43.871 "is_configured": false, 00:16:43.871 "data_offset": 0, 00:16:43.871 "data_size": 65536 00:16:43.871 }, 00:16:43.871 { 00:16:43.871 "name": "BaseBdev4", 00:16:43.871 "uuid": "007be04d-e162-41c8-99ac-d814499698fe", 00:16:43.871 "is_configured": true, 00:16:43.871 "data_offset": 0, 00:16:43.871 "data_size": 65536 00:16:43.871 } 00:16:43.871 ] 00:16:43.871 }' 00:16:43.871 17:51:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.871 17:51:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.131 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:44.131 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.131 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.131 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.131 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.393 [2024-11-20 17:51:11.316795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.393 
17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.393 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.393 "name": "Existed_Raid", 00:16:44.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.393 "strip_size_kb": 64, 00:16:44.393 "state": "configuring", 00:16:44.393 "raid_level": "raid5f", 00:16:44.393 "superblock": false, 00:16:44.393 "num_base_bdevs": 4, 00:16:44.393 "num_base_bdevs_discovered": 3, 00:16:44.393 "num_base_bdevs_operational": 4, 00:16:44.393 "base_bdevs_list": [ 00:16:44.393 { 00:16:44.393 "name": "BaseBdev1", 00:16:44.393 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:44.393 "is_configured": true, 00:16:44.393 "data_offset": 0, 00:16:44.393 "data_size": 65536 00:16:44.393 }, 00:16:44.393 { 00:16:44.393 "name": null, 00:16:44.393 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f", 00:16:44.393 "is_configured": 
false, 00:16:44.393 "data_offset": 0, 00:16:44.393 "data_size": 65536 00:16:44.393 }, 00:16:44.393 { 00:16:44.393 "name": "BaseBdev3", 00:16:44.393 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9", 00:16:44.393 "is_configured": true, 00:16:44.393 "data_offset": 0, 00:16:44.393 "data_size": 65536 00:16:44.393 }, 00:16:44.393 { 00:16:44.393 "name": "BaseBdev4", 00:16:44.394 "uuid": "007be04d-e162-41c8-99ac-d814499698fe", 00:16:44.394 "is_configured": true, 00:16:44.394 "data_offset": 0, 00:16:44.394 "data_size": 65536 00:16:44.394 } 00:16:44.394 ] 00:16:44.394 }' 00:16:44.394 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.394 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.655 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.655 [2024-11-20 17:51:11.800005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.915 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.915 "name": "Existed_Raid", 00:16:44.915 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:44.916 "strip_size_kb": 64, 00:16:44.916 "state": "configuring", 00:16:44.916 "raid_level": "raid5f", 00:16:44.916 "superblock": false, 00:16:44.916 "num_base_bdevs": 4, 00:16:44.916 "num_base_bdevs_discovered": 2, 00:16:44.916 "num_base_bdevs_operational": 4, 00:16:44.916 "base_bdevs_list": [ 00:16:44.916 { 00:16:44.916 "name": null, 00:16:44.916 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:44.916 "is_configured": false, 00:16:44.916 "data_offset": 0, 00:16:44.916 "data_size": 65536 00:16:44.916 }, 00:16:44.916 { 00:16:44.916 "name": null, 00:16:44.916 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f", 00:16:44.916 "is_configured": false, 00:16:44.916 "data_offset": 0, 00:16:44.916 "data_size": 65536 00:16:44.916 }, 00:16:44.916 { 00:16:44.916 "name": "BaseBdev3", 00:16:44.916 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9", 00:16:44.916 "is_configured": true, 00:16:44.916 "data_offset": 0, 00:16:44.916 "data_size": 65536 00:16:44.916 }, 00:16:44.916 { 00:16:44.916 "name": "BaseBdev4", 00:16:44.916 "uuid": "007be04d-e162-41c8-99ac-d814499698fe", 00:16:44.916 "is_configured": true, 00:16:44.916 "data_offset": 0, 00:16:44.916 "data_size": 65536 00:16:44.916 } 00:16:44.916 ] 00:16:44.916 }' 00:16:44.916 17:51:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.916 17:51:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.175 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:45.175 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.175 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.175 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.175 17:51:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.435 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:45.435 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:45.435 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.435 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.435 [2024-11-20 17:51:12.358319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.435 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.435 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:45.435 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.436 "name": "Existed_Raid", 00:16:45.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.436 "strip_size_kb": 64, 00:16:45.436 "state": "configuring", 00:16:45.436 "raid_level": "raid5f", 00:16:45.436 "superblock": false, 00:16:45.436 "num_base_bdevs": 4, 00:16:45.436 "num_base_bdevs_discovered": 3, 00:16:45.436 "num_base_bdevs_operational": 4, 00:16:45.436 "base_bdevs_list": [ 00:16:45.436 { 00:16:45.436 "name": null, 00:16:45.436 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:45.436 "is_configured": false, 00:16:45.436 "data_offset": 0, 00:16:45.436 "data_size": 65536 00:16:45.436 }, 00:16:45.436 { 00:16:45.436 "name": "BaseBdev2", 00:16:45.436 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f", 00:16:45.436 "is_configured": true, 00:16:45.436 "data_offset": 0, 00:16:45.436 "data_size": 65536 00:16:45.436 }, 00:16:45.436 { 00:16:45.436 "name": "BaseBdev3", 00:16:45.436 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9", 00:16:45.436 "is_configured": true, 00:16:45.436 "data_offset": 0, 00:16:45.436 "data_size": 65536 00:16:45.436 }, 00:16:45.436 { 00:16:45.436 "name": "BaseBdev4", 00:16:45.436 "uuid": "007be04d-e162-41c8-99ac-d814499698fe", 00:16:45.436 "is_configured": true, 00:16:45.436 "data_offset": 0, 00:16:45.436 "data_size": 65536 00:16:45.436 } 00:16:45.436 ] 00:16:45.436 }' 00:16:45.436 17:51:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.436 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.697 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.697 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:45.697 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.697 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c71455e3-5b3b-409b-885f-095e5cdb3924 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.958 [2024-11-20 17:51:12.958369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:45.958 [2024-11-20 
17:51:12.958498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:45.958 [2024-11-20 17:51:12.958523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:45.958 [2024-11-20 17:51:12.958854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:45.958 [2024-11-20 17:51:12.965149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:45.958 [2024-11-20 17:51:12.965221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:45.958 [2024-11-20 17:51:12.965537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.958 NewBaseBdev 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.958 [ 00:16:45.958 { 00:16:45.958 "name": "NewBaseBdev", 00:16:45.958 "aliases": [ 00:16:45.958 "c71455e3-5b3b-409b-885f-095e5cdb3924" 00:16:45.958 ], 00:16:45.958 "product_name": "Malloc disk", 00:16:45.958 "block_size": 512, 00:16:45.958 "num_blocks": 65536, 00:16:45.958 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:45.958 "assigned_rate_limits": { 00:16:45.958 "rw_ios_per_sec": 0, 00:16:45.958 "rw_mbytes_per_sec": 0, 00:16:45.958 "r_mbytes_per_sec": 0, 00:16:45.958 "w_mbytes_per_sec": 0 00:16:45.958 }, 00:16:45.958 "claimed": true, 00:16:45.958 "claim_type": "exclusive_write", 00:16:45.958 "zoned": false, 00:16:45.958 "supported_io_types": { 00:16:45.958 "read": true, 00:16:45.958 "write": true, 00:16:45.958 "unmap": true, 00:16:45.958 "flush": true, 00:16:45.958 "reset": true, 00:16:45.958 "nvme_admin": false, 00:16:45.958 "nvme_io": false, 00:16:45.958 "nvme_io_md": false, 00:16:45.958 "write_zeroes": true, 00:16:45.958 "zcopy": true, 00:16:45.958 "get_zone_info": false, 00:16:45.958 "zone_management": false, 00:16:45.958 "zone_append": false, 00:16:45.958 "compare": false, 00:16:45.958 "compare_and_write": false, 00:16:45.958 "abort": true, 00:16:45.958 "seek_hole": false, 00:16:45.958 "seek_data": false, 00:16:45.958 "copy": true, 00:16:45.958 "nvme_iov_md": false 00:16:45.958 }, 00:16:45.958 "memory_domains": [ 00:16:45.958 { 00:16:45.958 "dma_device_id": "system", 00:16:45.958 "dma_device_type": 1 00:16:45.958 }, 00:16:45.958 { 00:16:45.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.958 "dma_device_type": 2 00:16:45.958 } 
00:16:45.958 ], 00:16:45.958 "driver_specific": {} 00:16:45.958 } 00:16:45.958 ] 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.958 17:51:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.958 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.958 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.958 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.958 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.958 17:51:13 bdev_raid.raid5f_state_function_test -- 
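The trace above exercises the `waitforbdev NewBaseBdev` helper, which polls `rpc_cmd bdev_get_bdevs` until the bdev shows up or a timeout (defaulting to 2000 ms, per `bdev_timeout=2000` in the log) expires. A minimal Python sketch of that polling loop, using a stub in place of the real RPC call (the stub and its return shape are illustrative assumptions, not SPDK API):

```python
import time

def waitforbdev(get_bdevs, bdev_name, timeout_ms=2000, poll_interval=0.05):
    # Mirror of the shell helper: repeatedly list bdevs and look for the
    # requested name; return 0 on success, 1 if the timeout expires.
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if any(b["name"] == bdev_name for b in get_bdevs()):
            return 0
        time.sleep(poll_interval)
    return 1

# Stub standing in for `rpc_cmd bdev_get_bdevs`; a real run would issue the
# RPC against the SPDK target instead.
print(waitforbdev(lambda: [{"name": "NewBaseBdev"}], "NewBaseBdev"))  # -> 0
```

The shell version additionally calls `bdev_wait_for_examine` first, so the poll only starts once bdev examination has settled.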
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.958 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.958 "name": "Existed_Raid", 00:16:45.958 "uuid": "0def8821-2511-40ea-9022-16a1ebaef348", 00:16:45.958 "strip_size_kb": 64, 00:16:45.958 "state": "online", 00:16:45.958 "raid_level": "raid5f", 00:16:45.958 "superblock": false, 00:16:45.958 "num_base_bdevs": 4, 00:16:45.958 "num_base_bdevs_discovered": 4, 00:16:45.958 "num_base_bdevs_operational": 4, 00:16:45.958 "base_bdevs_list": [ 00:16:45.958 { 00:16:45.958 "name": "NewBaseBdev", 00:16:45.958 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:45.958 "is_configured": true, 00:16:45.958 "data_offset": 0, 00:16:45.958 "data_size": 65536 00:16:45.958 }, 00:16:45.958 { 00:16:45.958 "name": "BaseBdev2", 00:16:45.958 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f", 00:16:45.958 "is_configured": true, 00:16:45.959 "data_offset": 0, 00:16:45.959 "data_size": 65536 00:16:45.959 }, 00:16:45.959 { 00:16:45.959 "name": "BaseBdev3", 00:16:45.959 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9", 00:16:45.959 "is_configured": true, 00:16:45.959 "data_offset": 0, 00:16:45.959 "data_size": 65536 00:16:45.959 }, 00:16:45.959 { 00:16:45.959 "name": "BaseBdev4", 00:16:45.959 "uuid": "007be04d-e162-41c8-99ac-d814499698fe", 00:16:45.959 "is_configured": true, 00:16:45.959 "data_offset": 0, 00:16:45.959 "data_size": 65536 00:16:45.959 } 00:16:45.959 ] 00:16:45.959 }' 00:16:45.959 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.959 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test 
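`verify_raid_bdev_state Existed_Raid online raid5f 64 4` above fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, selects the entry with `jq`, and compares the state fields. A small Python sketch of the same check, run against a hard-coded sample shaped like the `raid_bdev_info` in the log (values copied from the trace, not from a live target):

```python
import json

# Sample shaped like `bdev_raid_get_bdevs all` output; trimmed to the fields
# the state check actually reads.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "base_bdevs_list": [
      {"name": "NewBaseBdev", "is_configured": true},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true},
      {"name": "BaseBdev4", "is_configured": true}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    # Equivalent of: jq -r '.[] | select(.name == "...")' plus field compares.
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_discovered"] == operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "online", "raid5f", 64, 4)
print(info["state"])  # -> online
```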
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.528 [2024-11-20 17:51:13.465731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:46.528 "name": "Existed_Raid", 00:16:46.528 "aliases": [ 00:16:46.528 "0def8821-2511-40ea-9022-16a1ebaef348" 00:16:46.528 ], 00:16:46.528 "product_name": "Raid Volume", 00:16:46.528 "block_size": 512, 00:16:46.528 "num_blocks": 196608, 00:16:46.528 "uuid": "0def8821-2511-40ea-9022-16a1ebaef348", 00:16:46.528 "assigned_rate_limits": { 00:16:46.528 "rw_ios_per_sec": 0, 00:16:46.528 "rw_mbytes_per_sec": 0, 00:16:46.528 "r_mbytes_per_sec": 0, 00:16:46.528 "w_mbytes_per_sec": 0 00:16:46.528 }, 00:16:46.528 "claimed": false, 00:16:46.528 "zoned": false, 00:16:46.528 "supported_io_types": { 00:16:46.528 "read": true, 00:16:46.528 "write": true, 00:16:46.528 "unmap": false, 00:16:46.528 "flush": false, 00:16:46.528 "reset": true, 00:16:46.528 "nvme_admin": false, 00:16:46.528 "nvme_io": false, 00:16:46.528 "nvme_io_md": 
false, 00:16:46.528 "write_zeroes": true, 00:16:46.528 "zcopy": false, 00:16:46.528 "get_zone_info": false, 00:16:46.528 "zone_management": false, 00:16:46.528 "zone_append": false, 00:16:46.528 "compare": false, 00:16:46.528 "compare_and_write": false, 00:16:46.528 "abort": false, 00:16:46.528 "seek_hole": false, 00:16:46.528 "seek_data": false, 00:16:46.528 "copy": false, 00:16:46.528 "nvme_iov_md": false 00:16:46.528 }, 00:16:46.528 "driver_specific": { 00:16:46.528 "raid": { 00:16:46.528 "uuid": "0def8821-2511-40ea-9022-16a1ebaef348", 00:16:46.528 "strip_size_kb": 64, 00:16:46.528 "state": "online", 00:16:46.528 "raid_level": "raid5f", 00:16:46.528 "superblock": false, 00:16:46.528 "num_base_bdevs": 4, 00:16:46.528 "num_base_bdevs_discovered": 4, 00:16:46.528 "num_base_bdevs_operational": 4, 00:16:46.528 "base_bdevs_list": [ 00:16:46.528 { 00:16:46.528 "name": "NewBaseBdev", 00:16:46.528 "uuid": "c71455e3-5b3b-409b-885f-095e5cdb3924", 00:16:46.528 "is_configured": true, 00:16:46.528 "data_offset": 0, 00:16:46.528 "data_size": 65536 00:16:46.528 }, 00:16:46.528 { 00:16:46.528 "name": "BaseBdev2", 00:16:46.528 "uuid": "7564709e-7520-47c2-b9f1-31a989dbc68f", 00:16:46.528 "is_configured": true, 00:16:46.528 "data_offset": 0, 00:16:46.528 "data_size": 65536 00:16:46.528 }, 00:16:46.528 { 00:16:46.528 "name": "BaseBdev3", 00:16:46.528 "uuid": "655703dc-5ac3-4427-b5dd-e8dc7fb532e9", 00:16:46.528 "is_configured": true, 00:16:46.528 "data_offset": 0, 00:16:46.528 "data_size": 65536 00:16:46.528 }, 00:16:46.528 { 00:16:46.528 "name": "BaseBdev4", 00:16:46.528 "uuid": "007be04d-e162-41c8-99ac-d814499698fe", 00:16:46.528 "is_configured": true, 00:16:46.528 "data_offset": 0, 00:16:46.528 "data_size": 65536 00:16:46.528 } 00:16:46.528 ] 00:16:46.528 } 00:16:46.528 } 00:16:46.528 }' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.528 17:51:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:46.528 BaseBdev2 00:16:46.528 BaseBdev3 00:16:46.528 BaseBdev4' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.528 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.788 [2024-11-20 17:51:13.792998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.788 [2024-11-20 17:51:13.793041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.788 [2024-11-20 17:51:13.793136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.788 [2024-11-20 17:51:13.793465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.788 [2024-11-20 17:51:13.793476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83248 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83248 ']' 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83248 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.788 17:51:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83248 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83248' 00:16:46.788 killing process with pid 83248 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83248 00:16:46.788 [2024-11-20 17:51:13.828319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.788 17:51:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83248 00:16:47.357 [2024-11-20 17:51:14.247547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.297 17:51:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:48.297 00:16:48.297 real 0m11.620s 00:16:48.297 user 0m18.068s 00:16:48.297 sys 0m2.320s 00:16:48.297 ************************************ 00:16:48.297 END TEST raid5f_state_function_test 00:16:48.297 ************************************ 00:16:48.297 17:51:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.297 17:51:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.557 17:51:15 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:48.557 17:51:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:48.557 17:51:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.557 17:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.557 ************************************ 00:16:48.557 START TEST 
raid5f_state_function_test_sb 00:16:48.557 ************************************ 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:48.557 
17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83914 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83914' 00:16:48.557 Process raid pid: 83914 00:16:48.557 17:51:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83914 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83914 ']' 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.557 17:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.558 17:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.558 17:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.558 17:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.558 [2024-11-20 17:51:15.624539] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:16:48.558 [2024-11-20 17:51:15.624659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.818 [2024-11-20 17:51:15.805054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.818 [2024-11-20 17:51:15.938646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.077 [2024-11-20 17:51:16.177990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.077 [2024-11-20 17:51:16.178036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.337 [2024-11-20 17:51:16.439532] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.337 [2024-11-20 17:51:16.439598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.337 [2024-11-20 17:51:16.439608] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.337 [2024-11-20 17:51:16.439618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.337 [2024-11-20 17:51:16.439624] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:49.337 [2024-11-20 17:51:16.439633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.337 [2024-11-20 17:51:16.439639] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:49.337 [2024-11-20 17:51:16.439648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.337 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.338 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.338 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:49.338 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.338 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.338 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.338 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.338 "name": "Existed_Raid", 00:16:49.338 "uuid": "0001ebda-5dd1-4f9e-839e-357a771f6f9f", 00:16:49.338 "strip_size_kb": 64, 00:16:49.338 "state": "configuring", 00:16:49.338 "raid_level": "raid5f", 00:16:49.338 "superblock": true, 00:16:49.338 "num_base_bdevs": 4, 00:16:49.338 "num_base_bdevs_discovered": 0, 00:16:49.338 "num_base_bdevs_operational": 4, 00:16:49.338 "base_bdevs_list": [ 00:16:49.338 { 00:16:49.338 "name": "BaseBdev1", 00:16:49.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.338 "is_configured": false, 00:16:49.338 "data_offset": 0, 00:16:49.338 "data_size": 0 00:16:49.338 }, 00:16:49.338 { 00:16:49.338 "name": "BaseBdev2", 00:16:49.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.338 "is_configured": false, 00:16:49.338 "data_offset": 0, 00:16:49.338 "data_size": 0 00:16:49.338 }, 00:16:49.338 { 00:16:49.338 "name": "BaseBdev3", 00:16:49.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.338 "is_configured": false, 00:16:49.338 "data_offset": 0, 00:16:49.338 "data_size": 0 00:16:49.338 }, 00:16:49.338 { 00:16:49.338 "name": "BaseBdev4", 00:16:49.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.338 "is_configured": false, 00:16:49.338 "data_offset": 0, 00:16:49.338 "data_size": 0 00:16:49.338 } 00:16:49.338 ] 00:16:49.338 }' 00:16:49.338 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.338 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.908 [2024-11-20 17:51:16.882723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.908 [2024-11-20 17:51:16.882828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.908 [2024-11-20 17:51:16.894677] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.908 [2024-11-20 17:51:16.894755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.908 [2024-11-20 17:51:16.894782] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.908 [2024-11-20 17:51:16.894804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.908 [2024-11-20 17:51:16.894820] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.908 [2024-11-20 17:51:16.894839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.908 [2024-11-20 17:51:16.894855] 
bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:49.908 [2024-11-20 17:51:16.894875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.908 [2024-11-20 17:51:16.948103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.908 BaseBdev1 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.908 [ 00:16:49.908 { 00:16:49.908 "name": "BaseBdev1", 00:16:49.908 "aliases": [ 00:16:49.908 "d90994ee-b9f7-44e8-90d3-3a35368b76c1" 00:16:49.908 ], 00:16:49.908 "product_name": "Malloc disk", 00:16:49.908 "block_size": 512, 00:16:49.908 "num_blocks": 65536, 00:16:49.908 "uuid": "d90994ee-b9f7-44e8-90d3-3a35368b76c1", 00:16:49.908 "assigned_rate_limits": { 00:16:49.908 "rw_ios_per_sec": 0, 00:16:49.908 "rw_mbytes_per_sec": 0, 00:16:49.908 "r_mbytes_per_sec": 0, 00:16:49.908 "w_mbytes_per_sec": 0 00:16:49.908 }, 00:16:49.908 "claimed": true, 00:16:49.908 "claim_type": "exclusive_write", 00:16:49.908 "zoned": false, 00:16:49.908 "supported_io_types": { 00:16:49.908 "read": true, 00:16:49.908 "write": true, 00:16:49.908 "unmap": true, 00:16:49.908 "flush": true, 00:16:49.908 "reset": true, 00:16:49.908 "nvme_admin": false, 00:16:49.908 "nvme_io": false, 00:16:49.908 "nvme_io_md": false, 00:16:49.908 "write_zeroes": true, 00:16:49.908 "zcopy": true, 00:16:49.908 "get_zone_info": false, 00:16:49.908 "zone_management": false, 00:16:49.908 "zone_append": false, 00:16:49.908 "compare": false, 00:16:49.908 "compare_and_write": false, 00:16:49.908 "abort": true, 00:16:49.908 "seek_hole": false, 00:16:49.908 "seek_data": false, 00:16:49.908 "copy": true, 00:16:49.908 "nvme_iov_md": false 00:16:49.908 }, 00:16:49.908 "memory_domains": [ 00:16:49.908 { 00:16:49.908 "dma_device_id": "system", 00:16:49.908 "dma_device_type": 1 00:16:49.908 }, 00:16:49.908 { 00:16:49.908 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:49.908 "dma_device_type": 2 00:16:49.908 } 00:16:49.908 ], 00:16:49.908 "driver_specific": {} 00:16:49.908 } 00:16:49.908 ] 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.908 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.909 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.909 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.909 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.909 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.909 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.909 17:51:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.909 17:51:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.909 17:51:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.909 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.909 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.909 "name": "Existed_Raid", 00:16:49.909 "uuid": "a424907b-0196-440f-8d52-dcbc62871eee", 00:16:49.909 "strip_size_kb": 64, 00:16:49.909 "state": "configuring", 00:16:49.909 "raid_level": "raid5f", 00:16:49.909 "superblock": true, 00:16:49.909 "num_base_bdevs": 4, 00:16:49.909 "num_base_bdevs_discovered": 1, 00:16:49.909 "num_base_bdevs_operational": 4, 00:16:49.909 "base_bdevs_list": [ 00:16:49.909 { 00:16:49.909 "name": "BaseBdev1", 00:16:49.909 "uuid": "d90994ee-b9f7-44e8-90d3-3a35368b76c1", 00:16:49.909 "is_configured": true, 00:16:49.909 "data_offset": 2048, 00:16:49.909 "data_size": 63488 00:16:49.909 }, 00:16:49.909 { 00:16:49.909 "name": "BaseBdev2", 00:16:49.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.909 "is_configured": false, 00:16:49.909 "data_offset": 0, 00:16:49.909 "data_size": 0 00:16:49.909 }, 00:16:49.909 { 00:16:49.909 "name": "BaseBdev3", 00:16:49.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.909 "is_configured": false, 00:16:49.909 "data_offset": 0, 00:16:49.909 "data_size": 0 00:16:49.909 }, 00:16:49.909 { 00:16:49.909 "name": "BaseBdev4", 00:16:49.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.909 "is_configured": false, 00:16:49.909 "data_offset": 0, 00:16:49.909 "data_size": 0 00:16:49.909 } 00:16:49.909 ] 00:16:49.909 }' 00:16:49.909 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.909 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.479 17:51:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.479 [2024-11-20 17:51:17.375443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.479 [2024-11-20 17:51:17.375559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.479 [2024-11-20 17:51:17.387439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.479 [2024-11-20 17:51:17.389635] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.479 [2024-11-20 17:51:17.389709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.479 [2024-11-20 17:51:17.389737] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.479 [2024-11-20 17:51:17.389760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.479 [2024-11-20 17:51:17.389777] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:50.479 [2024-11-20 17:51:17.389795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.479 17:51:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.479 "name": "Existed_Raid", 00:16:50.479 "uuid": "d78ae318-3d69-4569-b805-8f0e18f3d288", 00:16:50.479 "strip_size_kb": 64, 00:16:50.479 "state": "configuring", 00:16:50.479 "raid_level": "raid5f", 00:16:50.479 "superblock": true, 00:16:50.479 "num_base_bdevs": 4, 00:16:50.479 "num_base_bdevs_discovered": 1, 00:16:50.479 "num_base_bdevs_operational": 4, 00:16:50.479 "base_bdevs_list": [ 00:16:50.479 { 00:16:50.479 "name": "BaseBdev1", 00:16:50.479 "uuid": "d90994ee-b9f7-44e8-90d3-3a35368b76c1", 00:16:50.479 "is_configured": true, 00:16:50.479 "data_offset": 2048, 00:16:50.479 "data_size": 63488 00:16:50.479 }, 00:16:50.479 { 00:16:50.479 "name": "BaseBdev2", 00:16:50.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.479 "is_configured": false, 00:16:50.479 "data_offset": 0, 00:16:50.479 "data_size": 0 00:16:50.479 }, 00:16:50.479 { 00:16:50.479 "name": "BaseBdev3", 00:16:50.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.479 "is_configured": false, 00:16:50.479 "data_offset": 0, 00:16:50.479 "data_size": 0 00:16:50.479 }, 00:16:50.479 { 00:16:50.479 "name": "BaseBdev4", 00:16:50.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.479 "is_configured": false, 00:16:50.479 "data_offset": 0, 00:16:50.479 "data_size": 0 00:16:50.479 } 00:16:50.479 ] 00:16:50.479 }' 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.479 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.739 [2024-11-20 17:51:17.850706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.739 BaseBdev2 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.739 [ 00:16:50.739 { 00:16:50.739 "name": "BaseBdev2", 00:16:50.739 "aliases": [ 00:16:50.739 
"3bc2cf8b-10cb-4e0e-844b-3cd224ef4a10" 00:16:50.739 ], 00:16:50.739 "product_name": "Malloc disk", 00:16:50.739 "block_size": 512, 00:16:50.739 "num_blocks": 65536, 00:16:50.739 "uuid": "3bc2cf8b-10cb-4e0e-844b-3cd224ef4a10", 00:16:50.739 "assigned_rate_limits": { 00:16:50.739 "rw_ios_per_sec": 0, 00:16:50.739 "rw_mbytes_per_sec": 0, 00:16:50.739 "r_mbytes_per_sec": 0, 00:16:50.739 "w_mbytes_per_sec": 0 00:16:50.739 }, 00:16:50.739 "claimed": true, 00:16:50.739 "claim_type": "exclusive_write", 00:16:50.739 "zoned": false, 00:16:50.739 "supported_io_types": { 00:16:50.739 "read": true, 00:16:50.739 "write": true, 00:16:50.739 "unmap": true, 00:16:50.739 "flush": true, 00:16:50.739 "reset": true, 00:16:50.739 "nvme_admin": false, 00:16:50.739 "nvme_io": false, 00:16:50.739 "nvme_io_md": false, 00:16:50.739 "write_zeroes": true, 00:16:50.739 "zcopy": true, 00:16:50.739 "get_zone_info": false, 00:16:50.739 "zone_management": false, 00:16:50.739 "zone_append": false, 00:16:50.739 "compare": false, 00:16:50.739 "compare_and_write": false, 00:16:50.739 "abort": true, 00:16:50.739 "seek_hole": false, 00:16:50.739 "seek_data": false, 00:16:50.739 "copy": true, 00:16:50.739 "nvme_iov_md": false 00:16:50.739 }, 00:16:50.739 "memory_domains": [ 00:16:50.739 { 00:16:50.739 "dma_device_id": "system", 00:16:50.739 "dma_device_type": 1 00:16:50.739 }, 00:16:50.739 { 00:16:50.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.739 "dma_device_type": 2 00:16:50.739 } 00:16:50.739 ], 00:16:50.739 "driver_specific": {} 00:16:50.739 } 00:16:50.739 ] 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.739 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.740 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.000 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.000 "name": "Existed_Raid", 00:16:51.000 "uuid": 
"d78ae318-3d69-4569-b805-8f0e18f3d288", 00:16:51.000 "strip_size_kb": 64, 00:16:51.000 "state": "configuring", 00:16:51.000 "raid_level": "raid5f", 00:16:51.000 "superblock": true, 00:16:51.000 "num_base_bdevs": 4, 00:16:51.000 "num_base_bdevs_discovered": 2, 00:16:51.000 "num_base_bdevs_operational": 4, 00:16:51.000 "base_bdevs_list": [ 00:16:51.000 { 00:16:51.000 "name": "BaseBdev1", 00:16:51.000 "uuid": "d90994ee-b9f7-44e8-90d3-3a35368b76c1", 00:16:51.000 "is_configured": true, 00:16:51.000 "data_offset": 2048, 00:16:51.000 "data_size": 63488 00:16:51.000 }, 00:16:51.000 { 00:16:51.000 "name": "BaseBdev2", 00:16:51.000 "uuid": "3bc2cf8b-10cb-4e0e-844b-3cd224ef4a10", 00:16:51.000 "is_configured": true, 00:16:51.000 "data_offset": 2048, 00:16:51.000 "data_size": 63488 00:16:51.000 }, 00:16:51.000 { 00:16:51.000 "name": "BaseBdev3", 00:16:51.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.000 "is_configured": false, 00:16:51.000 "data_offset": 0, 00:16:51.000 "data_size": 0 00:16:51.000 }, 00:16:51.000 { 00:16:51.000 "name": "BaseBdev4", 00:16:51.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.000 "is_configured": false, 00:16:51.000 "data_offset": 0, 00:16:51.000 "data_size": 0 00:16:51.000 } 00:16:51.000 ] 00:16:51.000 }' 00:16:51.000 17:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.000 17:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.260 [2024-11-20 17:51:18.325161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.260 BaseBdev3 
00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.260 [ 00:16:51.260 { 00:16:51.260 "name": "BaseBdev3", 00:16:51.260 "aliases": [ 00:16:51.260 "76e00026-70cd-42bd-a479-726235f3b3d8" 00:16:51.260 ], 00:16:51.260 "product_name": "Malloc disk", 00:16:51.260 "block_size": 512, 00:16:51.260 "num_blocks": 65536, 00:16:51.260 "uuid": "76e00026-70cd-42bd-a479-726235f3b3d8", 00:16:51.260 
"assigned_rate_limits": { 00:16:51.260 "rw_ios_per_sec": 0, 00:16:51.260 "rw_mbytes_per_sec": 0, 00:16:51.260 "r_mbytes_per_sec": 0, 00:16:51.260 "w_mbytes_per_sec": 0 00:16:51.260 }, 00:16:51.260 "claimed": true, 00:16:51.260 "claim_type": "exclusive_write", 00:16:51.260 "zoned": false, 00:16:51.260 "supported_io_types": { 00:16:51.260 "read": true, 00:16:51.260 "write": true, 00:16:51.260 "unmap": true, 00:16:51.260 "flush": true, 00:16:51.260 "reset": true, 00:16:51.260 "nvme_admin": false, 00:16:51.260 "nvme_io": false, 00:16:51.260 "nvme_io_md": false, 00:16:51.260 "write_zeroes": true, 00:16:51.260 "zcopy": true, 00:16:51.260 "get_zone_info": false, 00:16:51.260 "zone_management": false, 00:16:51.260 "zone_append": false, 00:16:51.260 "compare": false, 00:16:51.260 "compare_and_write": false, 00:16:51.260 "abort": true, 00:16:51.260 "seek_hole": false, 00:16:51.260 "seek_data": false, 00:16:51.260 "copy": true, 00:16:51.260 "nvme_iov_md": false 00:16:51.260 }, 00:16:51.260 "memory_domains": [ 00:16:51.260 { 00:16:51.260 "dma_device_id": "system", 00:16:51.260 "dma_device_type": 1 00:16:51.260 }, 00:16:51.260 { 00:16:51.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.260 "dma_device_type": 2 00:16:51.260 } 00:16:51.260 ], 00:16:51.260 "driver_specific": {} 00:16:51.260 } 00:16:51.260 ] 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.260 "name": "Existed_Raid", 00:16:51.260 "uuid": "d78ae318-3d69-4569-b805-8f0e18f3d288", 00:16:51.260 "strip_size_kb": 64, 00:16:51.260 "state": "configuring", 00:16:51.260 "raid_level": "raid5f", 00:16:51.260 "superblock": true, 00:16:51.260 "num_base_bdevs": 4, 00:16:51.260 "num_base_bdevs_discovered": 3, 
00:16:51.260 "num_base_bdevs_operational": 4, 00:16:51.260 "base_bdevs_list": [ 00:16:51.260 { 00:16:51.260 "name": "BaseBdev1", 00:16:51.260 "uuid": "d90994ee-b9f7-44e8-90d3-3a35368b76c1", 00:16:51.260 "is_configured": true, 00:16:51.260 "data_offset": 2048, 00:16:51.260 "data_size": 63488 00:16:51.260 }, 00:16:51.260 { 00:16:51.260 "name": "BaseBdev2", 00:16:51.260 "uuid": "3bc2cf8b-10cb-4e0e-844b-3cd224ef4a10", 00:16:51.260 "is_configured": true, 00:16:51.260 "data_offset": 2048, 00:16:51.260 "data_size": 63488 00:16:51.260 }, 00:16:51.260 { 00:16:51.260 "name": "BaseBdev3", 00:16:51.260 "uuid": "76e00026-70cd-42bd-a479-726235f3b3d8", 00:16:51.260 "is_configured": true, 00:16:51.260 "data_offset": 2048, 00:16:51.260 "data_size": 63488 00:16:51.260 }, 00:16:51.260 { 00:16:51.260 "name": "BaseBdev4", 00:16:51.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.260 "is_configured": false, 00:16:51.260 "data_offset": 0, 00:16:51.260 "data_size": 0 00:16:51.260 } 00:16:51.260 ] 00:16:51.260 }' 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.260 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.829 [2024-11-20 17:51:18.826097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:51.829 [2024-11-20 17:51:18.826522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:51.829 [2024-11-20 17:51:18.826542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:51.829 [2024-11-20 
17:51:18.826840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:51.829 BaseBdev4 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.829 [2024-11-20 17:51:18.834323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:51.829 [2024-11-20 17:51:18.834346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:51.829 [2024-11-20 17:51:18.834612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:51.829 17:51:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.829 [ 00:16:51.829 { 00:16:51.829 "name": "BaseBdev4", 00:16:51.829 "aliases": [ 00:16:51.829 "dcf2f26a-af25-4e39-bb3d-b73dde999c0b" 00:16:51.829 ], 00:16:51.829 "product_name": "Malloc disk", 00:16:51.829 "block_size": 512, 00:16:51.829 "num_blocks": 65536, 00:16:51.829 "uuid": "dcf2f26a-af25-4e39-bb3d-b73dde999c0b", 00:16:51.829 "assigned_rate_limits": { 00:16:51.829 "rw_ios_per_sec": 0, 00:16:51.829 "rw_mbytes_per_sec": 0, 00:16:51.829 "r_mbytes_per_sec": 0, 00:16:51.829 "w_mbytes_per_sec": 0 00:16:51.829 }, 00:16:51.829 "claimed": true, 00:16:51.829 "claim_type": "exclusive_write", 00:16:51.829 "zoned": false, 00:16:51.829 "supported_io_types": { 00:16:51.829 "read": true, 00:16:51.829 "write": true, 00:16:51.829 "unmap": true, 00:16:51.829 "flush": true, 00:16:51.829 "reset": true, 00:16:51.829 "nvme_admin": false, 00:16:51.829 "nvme_io": false, 00:16:51.829 "nvme_io_md": false, 00:16:51.829 "write_zeroes": true, 00:16:51.829 "zcopy": true, 00:16:51.829 "get_zone_info": false, 00:16:51.829 "zone_management": false, 00:16:51.829 "zone_append": false, 00:16:51.829 "compare": false, 00:16:51.829 "compare_and_write": false, 00:16:51.829 "abort": true, 00:16:51.829 "seek_hole": false, 00:16:51.829 "seek_data": false, 00:16:51.829 "copy": true, 00:16:51.829 "nvme_iov_md": false 00:16:51.829 }, 00:16:51.829 "memory_domains": [ 00:16:51.829 { 00:16:51.829 "dma_device_id": "system", 00:16:51.829 "dma_device_type": 1 00:16:51.829 }, 00:16:51.829 { 00:16:51.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.829 "dma_device_type": 2 00:16:51.829 } 00:16:51.829 ], 00:16:51.829 "driver_specific": {} 00:16:51.829 } 00:16:51.829 ] 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.829 17:51:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.829 "name": "Existed_Raid", 00:16:51.829 "uuid": "d78ae318-3d69-4569-b805-8f0e18f3d288", 00:16:51.829 "strip_size_kb": 64, 00:16:51.829 "state": "online", 00:16:51.829 "raid_level": "raid5f", 00:16:51.829 "superblock": true, 00:16:51.829 "num_base_bdevs": 4, 00:16:51.829 "num_base_bdevs_discovered": 4, 00:16:51.829 "num_base_bdevs_operational": 4, 00:16:51.829 "base_bdevs_list": [ 00:16:51.829 { 00:16:51.829 "name": "BaseBdev1", 00:16:51.829 "uuid": "d90994ee-b9f7-44e8-90d3-3a35368b76c1", 00:16:51.829 "is_configured": true, 00:16:51.829 "data_offset": 2048, 00:16:51.829 "data_size": 63488 00:16:51.829 }, 00:16:51.829 { 00:16:51.829 "name": "BaseBdev2", 00:16:51.829 "uuid": "3bc2cf8b-10cb-4e0e-844b-3cd224ef4a10", 00:16:51.829 "is_configured": true, 00:16:51.829 "data_offset": 2048, 00:16:51.829 "data_size": 63488 00:16:51.829 }, 00:16:51.829 { 00:16:51.829 "name": "BaseBdev3", 00:16:51.829 "uuid": "76e00026-70cd-42bd-a479-726235f3b3d8", 00:16:51.829 "is_configured": true, 00:16:51.829 "data_offset": 2048, 00:16:51.829 "data_size": 63488 00:16:51.829 }, 00:16:51.829 { 00:16:51.829 "name": "BaseBdev4", 00:16:51.829 "uuid": "dcf2f26a-af25-4e39-bb3d-b73dde999c0b", 00:16:51.829 "is_configured": true, 00:16:51.829 "data_offset": 2048, 00:16:51.829 "data_size": 63488 00:16:51.829 } 00:16:51.829 ] 00:16:51.829 }' 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.829 17:51:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:52.399 [2024-11-20 17:51:19.278912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.399 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:52.399 "name": "Existed_Raid", 00:16:52.399 "aliases": [ 00:16:52.399 "d78ae318-3d69-4569-b805-8f0e18f3d288" 00:16:52.399 ], 00:16:52.399 "product_name": "Raid Volume", 00:16:52.399 "block_size": 512, 00:16:52.399 "num_blocks": 190464, 00:16:52.399 "uuid": "d78ae318-3d69-4569-b805-8f0e18f3d288", 00:16:52.399 "assigned_rate_limits": { 00:16:52.399 "rw_ios_per_sec": 0, 00:16:52.399 "rw_mbytes_per_sec": 0, 00:16:52.399 "r_mbytes_per_sec": 0, 00:16:52.399 "w_mbytes_per_sec": 0 00:16:52.399 }, 00:16:52.399 "claimed": false, 00:16:52.399 "zoned": false, 00:16:52.399 "supported_io_types": { 00:16:52.399 "read": true, 00:16:52.399 "write": true, 00:16:52.399 "unmap": false, 00:16:52.399 "flush": false, 
00:16:52.399 "reset": true, 00:16:52.399 "nvme_admin": false, 00:16:52.399 "nvme_io": false, 00:16:52.399 "nvme_io_md": false, 00:16:52.399 "write_zeroes": true, 00:16:52.399 "zcopy": false, 00:16:52.399 "get_zone_info": false, 00:16:52.399 "zone_management": false, 00:16:52.399 "zone_append": false, 00:16:52.399 "compare": false, 00:16:52.399 "compare_and_write": false, 00:16:52.399 "abort": false, 00:16:52.399 "seek_hole": false, 00:16:52.399 "seek_data": false, 00:16:52.399 "copy": false, 00:16:52.399 "nvme_iov_md": false 00:16:52.399 }, 00:16:52.399 "driver_specific": { 00:16:52.399 "raid": { 00:16:52.399 "uuid": "d78ae318-3d69-4569-b805-8f0e18f3d288", 00:16:52.399 "strip_size_kb": 64, 00:16:52.399 "state": "online", 00:16:52.399 "raid_level": "raid5f", 00:16:52.399 "superblock": true, 00:16:52.399 "num_base_bdevs": 4, 00:16:52.399 "num_base_bdevs_discovered": 4, 00:16:52.399 "num_base_bdevs_operational": 4, 00:16:52.400 "base_bdevs_list": [ 00:16:52.400 { 00:16:52.400 "name": "BaseBdev1", 00:16:52.400 "uuid": "d90994ee-b9f7-44e8-90d3-3a35368b76c1", 00:16:52.400 "is_configured": true, 00:16:52.400 "data_offset": 2048, 00:16:52.400 "data_size": 63488 00:16:52.400 }, 00:16:52.400 { 00:16:52.400 "name": "BaseBdev2", 00:16:52.400 "uuid": "3bc2cf8b-10cb-4e0e-844b-3cd224ef4a10", 00:16:52.400 "is_configured": true, 00:16:52.400 "data_offset": 2048, 00:16:52.400 "data_size": 63488 00:16:52.400 }, 00:16:52.400 { 00:16:52.400 "name": "BaseBdev3", 00:16:52.400 "uuid": "76e00026-70cd-42bd-a479-726235f3b3d8", 00:16:52.400 "is_configured": true, 00:16:52.400 "data_offset": 2048, 00:16:52.400 "data_size": 63488 00:16:52.400 }, 00:16:52.400 { 00:16:52.400 "name": "BaseBdev4", 00:16:52.400 "uuid": "dcf2f26a-af25-4e39-bb3d-b73dde999c0b", 00:16:52.400 "is_configured": true, 00:16:52.400 "data_offset": 2048, 00:16:52.400 "data_size": 63488 00:16:52.400 } 00:16:52.400 ] 00:16:52.400 } 00:16:52.400 } 00:16:52.400 }' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:52.400 BaseBdev2 00:16:52.400 BaseBdev3 00:16:52.400 BaseBdev4' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.400 17:51:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:52.400 17:51:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.400 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 [2024-11-20 17:51:19.598157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.661 "name": "Existed_Raid", 00:16:52.661 "uuid": "d78ae318-3d69-4569-b805-8f0e18f3d288", 00:16:52.661 "strip_size_kb": 64, 00:16:52.661 "state": "online", 00:16:52.661 "raid_level": "raid5f", 00:16:52.661 "superblock": true, 00:16:52.661 "num_base_bdevs": 4, 00:16:52.661 "num_base_bdevs_discovered": 3, 00:16:52.661 "num_base_bdevs_operational": 3, 00:16:52.661 "base_bdevs_list": [ 00:16:52.661 { 00:16:52.661 "name": 
null, 00:16:52.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.661 "is_configured": false, 00:16:52.661 "data_offset": 0, 00:16:52.661 "data_size": 63488 00:16:52.661 }, 00:16:52.661 { 00:16:52.661 "name": "BaseBdev2", 00:16:52.661 "uuid": "3bc2cf8b-10cb-4e0e-844b-3cd224ef4a10", 00:16:52.661 "is_configured": true, 00:16:52.661 "data_offset": 2048, 00:16:52.661 "data_size": 63488 00:16:52.661 }, 00:16:52.661 { 00:16:52.661 "name": "BaseBdev3", 00:16:52.661 "uuid": "76e00026-70cd-42bd-a479-726235f3b3d8", 00:16:52.661 "is_configured": true, 00:16:52.661 "data_offset": 2048, 00:16:52.661 "data_size": 63488 00:16:52.661 }, 00:16:52.661 { 00:16:52.661 "name": "BaseBdev4", 00:16:52.661 "uuid": "dcf2f26a-af25-4e39-bb3d-b73dde999c0b", 00:16:52.661 "is_configured": true, 00:16:52.661 "data_offset": 2048, 00:16:52.661 "data_size": 63488 00:16:52.661 } 00:16:52.661 ] 00:16:52.661 }' 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.661 17:51:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.232 [2024-11-20 17:51:20.184144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:53.232 [2024-11-20 17:51:20.184340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.232 [2024-11-20 17:51:20.286452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.232 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.232 [2024-11-20 17:51:20.342370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.494 [2024-11-20 
17:51:20.500926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:53.494 [2024-11-20 17:51:20.500990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:53.494 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.494 17:51:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.783 BaseBdev2 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.783 [ 00:16:53.783 { 00:16:53.783 "name": "BaseBdev2", 00:16:53.783 "aliases": [ 00:16:53.783 "33c97a24-8c7b-40a8-9a91-91b34e934687" 00:16:53.783 ], 00:16:53.783 "product_name": "Malloc disk", 00:16:53.783 "block_size": 512, 00:16:53.783 
"num_blocks": 65536, 00:16:53.783 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:53.783 "assigned_rate_limits": { 00:16:53.783 "rw_ios_per_sec": 0, 00:16:53.783 "rw_mbytes_per_sec": 0, 00:16:53.783 "r_mbytes_per_sec": 0, 00:16:53.783 "w_mbytes_per_sec": 0 00:16:53.783 }, 00:16:53.783 "claimed": false, 00:16:53.783 "zoned": false, 00:16:53.783 "supported_io_types": { 00:16:53.783 "read": true, 00:16:53.783 "write": true, 00:16:53.783 "unmap": true, 00:16:53.783 "flush": true, 00:16:53.783 "reset": true, 00:16:53.783 "nvme_admin": false, 00:16:53.783 "nvme_io": false, 00:16:53.783 "nvme_io_md": false, 00:16:53.783 "write_zeroes": true, 00:16:53.783 "zcopy": true, 00:16:53.783 "get_zone_info": false, 00:16:53.783 "zone_management": false, 00:16:53.783 "zone_append": false, 00:16:53.783 "compare": false, 00:16:53.783 "compare_and_write": false, 00:16:53.783 "abort": true, 00:16:53.783 "seek_hole": false, 00:16:53.783 "seek_data": false, 00:16:53.783 "copy": true, 00:16:53.783 "nvme_iov_md": false 00:16:53.783 }, 00:16:53.783 "memory_domains": [ 00:16:53.783 { 00:16:53.783 "dma_device_id": "system", 00:16:53.783 "dma_device_type": 1 00:16:53.783 }, 00:16:53.783 { 00:16:53.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.783 "dma_device_type": 2 00:16:53.783 } 00:16:53.783 ], 00:16:53.783 "driver_specific": {} 00:16:53.783 } 00:16:53.783 ] 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:53.783 17:51:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.783 BaseBdev3 00:16:53.783 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.784 [ 00:16:53.784 { 00:16:53.784 "name": "BaseBdev3", 00:16:53.784 "aliases": [ 00:16:53.784 
"ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66" 00:16:53.784 ], 00:16:53.784 "product_name": "Malloc disk", 00:16:53.784 "block_size": 512, 00:16:53.784 "num_blocks": 65536, 00:16:53.784 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:53.784 "assigned_rate_limits": { 00:16:53.784 "rw_ios_per_sec": 0, 00:16:53.784 "rw_mbytes_per_sec": 0, 00:16:53.784 "r_mbytes_per_sec": 0, 00:16:53.784 "w_mbytes_per_sec": 0 00:16:53.784 }, 00:16:53.784 "claimed": false, 00:16:53.784 "zoned": false, 00:16:53.784 "supported_io_types": { 00:16:53.784 "read": true, 00:16:53.784 "write": true, 00:16:53.784 "unmap": true, 00:16:53.784 "flush": true, 00:16:53.784 "reset": true, 00:16:53.784 "nvme_admin": false, 00:16:53.784 "nvme_io": false, 00:16:53.784 "nvme_io_md": false, 00:16:53.784 "write_zeroes": true, 00:16:53.784 "zcopy": true, 00:16:53.784 "get_zone_info": false, 00:16:53.784 "zone_management": false, 00:16:53.784 "zone_append": false, 00:16:53.784 "compare": false, 00:16:53.784 "compare_and_write": false, 00:16:53.784 "abort": true, 00:16:53.784 "seek_hole": false, 00:16:53.784 "seek_data": false, 00:16:53.784 "copy": true, 00:16:53.784 "nvme_iov_md": false 00:16:53.784 }, 00:16:53.784 "memory_domains": [ 00:16:53.784 { 00:16:53.784 "dma_device_id": "system", 00:16:53.784 "dma_device_type": 1 00:16:53.784 }, 00:16:53.784 { 00:16:53.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.784 "dma_device_type": 2 00:16:53.784 } 00:16:53.784 ], 00:16:53.784 "driver_specific": {} 00:16:53.784 } 00:16:53.784 ] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.784 17:51:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.784 BaseBdev4 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:53.784 [ 00:16:53.784 { 00:16:53.784 "name": "BaseBdev4", 00:16:53.784 "aliases": [ 00:16:53.784 "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc" 00:16:53.784 ], 00:16:53.784 "product_name": "Malloc disk", 00:16:53.784 "block_size": 512, 00:16:53.784 "num_blocks": 65536, 00:16:53.784 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:53.784 "assigned_rate_limits": { 00:16:53.784 "rw_ios_per_sec": 0, 00:16:53.784 "rw_mbytes_per_sec": 0, 00:16:53.784 "r_mbytes_per_sec": 0, 00:16:53.784 "w_mbytes_per_sec": 0 00:16:53.784 }, 00:16:53.784 "claimed": false, 00:16:53.784 "zoned": false, 00:16:53.784 "supported_io_types": { 00:16:53.784 "read": true, 00:16:53.784 "write": true, 00:16:53.784 "unmap": true, 00:16:53.784 "flush": true, 00:16:53.784 "reset": true, 00:16:53.784 "nvme_admin": false, 00:16:53.784 "nvme_io": false, 00:16:53.784 "nvme_io_md": false, 00:16:53.784 "write_zeroes": true, 00:16:53.784 "zcopy": true, 00:16:53.784 "get_zone_info": false, 00:16:53.784 "zone_management": false, 00:16:53.784 "zone_append": false, 00:16:53.784 "compare": false, 00:16:53.784 "compare_and_write": false, 00:16:53.784 "abort": true, 00:16:53.784 "seek_hole": false, 00:16:53.784 "seek_data": false, 00:16:53.784 "copy": true, 00:16:53.784 "nvme_iov_md": false 00:16:53.784 }, 00:16:53.784 "memory_domains": [ 00:16:53.784 { 00:16:53.784 "dma_device_id": "system", 00:16:53.784 "dma_device_type": 1 00:16:53.784 }, 00:16:53.784 { 00:16:53.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.784 "dma_device_type": 2 00:16:53.784 } 00:16:53.784 ], 00:16:53.784 "driver_specific": {} 00:16:53.784 } 00:16:53.784 ] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:53.784 17:51:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.784 [2024-11-20 17:51:20.898394] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.784 [2024-11-20 17:51:20.898482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.784 [2024-11-20 17:51:20.898527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.784 [2024-11-20 17:51:20.900627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.784 [2024-11-20 17:51:20.900718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.784 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.058 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.058 "name": "Existed_Raid", 00:16:54.058 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:54.058 "strip_size_kb": 64, 00:16:54.058 "state": "configuring", 00:16:54.058 "raid_level": "raid5f", 00:16:54.058 "superblock": true, 00:16:54.058 "num_base_bdevs": 4, 00:16:54.058 "num_base_bdevs_discovered": 3, 00:16:54.058 "num_base_bdevs_operational": 4, 00:16:54.058 "base_bdevs_list": [ 00:16:54.058 { 00:16:54.058 "name": "BaseBdev1", 00:16:54.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.058 "is_configured": false, 00:16:54.058 "data_offset": 0, 00:16:54.058 "data_size": 0 00:16:54.058 }, 00:16:54.058 { 00:16:54.058 "name": "BaseBdev2", 00:16:54.058 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:54.058 "is_configured": true, 00:16:54.058 "data_offset": 2048, 00:16:54.058 
"data_size": 63488 00:16:54.058 }, 00:16:54.058 { 00:16:54.058 "name": "BaseBdev3", 00:16:54.058 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:54.058 "is_configured": true, 00:16:54.058 "data_offset": 2048, 00:16:54.058 "data_size": 63488 00:16:54.058 }, 00:16:54.058 { 00:16:54.058 "name": "BaseBdev4", 00:16:54.058 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:54.058 "is_configured": true, 00:16:54.058 "data_offset": 2048, 00:16:54.058 "data_size": 63488 00:16:54.058 } 00:16:54.058 ] 00:16:54.058 }' 00:16:54.058 17:51:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.058 17:51:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.320 [2024-11-20 17:51:21.333651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.320 17:51:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.320 "name": "Existed_Raid", 00:16:54.320 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:54.320 "strip_size_kb": 64, 00:16:54.320 "state": "configuring", 00:16:54.320 "raid_level": "raid5f", 00:16:54.320 "superblock": true, 00:16:54.320 "num_base_bdevs": 4, 00:16:54.320 "num_base_bdevs_discovered": 2, 00:16:54.320 "num_base_bdevs_operational": 4, 00:16:54.320 "base_bdevs_list": [ 00:16:54.320 { 00:16:54.320 "name": "BaseBdev1", 00:16:54.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.320 "is_configured": false, 00:16:54.320 "data_offset": 0, 00:16:54.320 "data_size": 0 00:16:54.320 }, 00:16:54.320 { 00:16:54.320 "name": null, 00:16:54.320 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:54.320 
"is_configured": false, 00:16:54.320 "data_offset": 0, 00:16:54.320 "data_size": 63488 00:16:54.320 }, 00:16:54.320 { 00:16:54.320 "name": "BaseBdev3", 00:16:54.320 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:54.320 "is_configured": true, 00:16:54.320 "data_offset": 2048, 00:16:54.320 "data_size": 63488 00:16:54.320 }, 00:16:54.320 { 00:16:54.320 "name": "BaseBdev4", 00:16:54.320 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:54.320 "is_configured": true, 00:16:54.320 "data_offset": 2048, 00:16:54.320 "data_size": 63488 00:16:54.320 } 00:16:54.320 ] 00:16:54.320 }' 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.320 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.580 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.580 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:54.580 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.580 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.580 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.839 BaseBdev1 00:16:54.839 [2024-11-20 17:51:21.814379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.839 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.840 [ 00:16:54.840 { 00:16:54.840 "name": "BaseBdev1", 00:16:54.840 "aliases": [ 00:16:54.840 "abbbd82c-1906-42ce-b6b1-7f3c78a050f7" 00:16:54.840 ], 00:16:54.840 "product_name": "Malloc disk", 00:16:54.840 "block_size": 512, 00:16:54.840 "num_blocks": 65536, 00:16:54.840 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 
00:16:54.840 "assigned_rate_limits": { 00:16:54.840 "rw_ios_per_sec": 0, 00:16:54.840 "rw_mbytes_per_sec": 0, 00:16:54.840 "r_mbytes_per_sec": 0, 00:16:54.840 "w_mbytes_per_sec": 0 00:16:54.840 }, 00:16:54.840 "claimed": true, 00:16:54.840 "claim_type": "exclusive_write", 00:16:54.840 "zoned": false, 00:16:54.840 "supported_io_types": { 00:16:54.840 "read": true, 00:16:54.840 "write": true, 00:16:54.840 "unmap": true, 00:16:54.840 "flush": true, 00:16:54.840 "reset": true, 00:16:54.840 "nvme_admin": false, 00:16:54.840 "nvme_io": false, 00:16:54.840 "nvme_io_md": false, 00:16:54.840 "write_zeroes": true, 00:16:54.840 "zcopy": true, 00:16:54.840 "get_zone_info": false, 00:16:54.840 "zone_management": false, 00:16:54.840 "zone_append": false, 00:16:54.840 "compare": false, 00:16:54.840 "compare_and_write": false, 00:16:54.840 "abort": true, 00:16:54.840 "seek_hole": false, 00:16:54.840 "seek_data": false, 00:16:54.840 "copy": true, 00:16:54.840 "nvme_iov_md": false 00:16:54.840 }, 00:16:54.840 "memory_domains": [ 00:16:54.840 { 00:16:54.840 "dma_device_id": "system", 00:16:54.840 "dma_device_type": 1 00:16:54.840 }, 00:16:54.840 { 00:16:54.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.840 "dma_device_type": 2 00:16:54.840 } 00:16:54.840 ], 00:16:54.840 "driver_specific": {} 00:16:54.840 } 00:16:54.840 ] 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.840 17:51:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.840 "name": "Existed_Raid", 00:16:54.840 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:54.840 "strip_size_kb": 64, 00:16:54.840 "state": "configuring", 00:16:54.840 "raid_level": "raid5f", 00:16:54.840 "superblock": true, 00:16:54.840 "num_base_bdevs": 4, 00:16:54.840 "num_base_bdevs_discovered": 3, 00:16:54.840 "num_base_bdevs_operational": 4, 00:16:54.840 "base_bdevs_list": [ 00:16:54.840 { 00:16:54.840 "name": "BaseBdev1", 00:16:54.840 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 
00:16:54.840 "is_configured": true, 00:16:54.840 "data_offset": 2048, 00:16:54.840 "data_size": 63488 00:16:54.840 }, 00:16:54.840 { 00:16:54.840 "name": null, 00:16:54.840 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:54.840 "is_configured": false, 00:16:54.840 "data_offset": 0, 00:16:54.840 "data_size": 63488 00:16:54.840 }, 00:16:54.840 { 00:16:54.840 "name": "BaseBdev3", 00:16:54.840 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:54.840 "is_configured": true, 00:16:54.840 "data_offset": 2048, 00:16:54.840 "data_size": 63488 00:16:54.840 }, 00:16:54.840 { 00:16:54.840 "name": "BaseBdev4", 00:16:54.840 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:54.840 "is_configured": true, 00:16:54.840 "data_offset": 2048, 00:16:54.840 "data_size": 63488 00:16:54.840 } 00:16:54.840 ] 00:16:54.840 }' 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.840 17:51:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.410 [2024-11-20 17:51:22.349592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.410 "name": "Existed_Raid", 00:16:55.410 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:55.410 "strip_size_kb": 64, 00:16:55.410 "state": "configuring", 00:16:55.410 "raid_level": "raid5f", 00:16:55.410 "superblock": true, 00:16:55.410 "num_base_bdevs": 4, 00:16:55.410 "num_base_bdevs_discovered": 2, 00:16:55.410 "num_base_bdevs_operational": 4, 00:16:55.410 "base_bdevs_list": [ 00:16:55.410 { 00:16:55.410 "name": "BaseBdev1", 00:16:55.410 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 00:16:55.410 "is_configured": true, 00:16:55.410 "data_offset": 2048, 00:16:55.410 "data_size": 63488 00:16:55.410 }, 00:16:55.410 { 00:16:55.410 "name": null, 00:16:55.410 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:55.410 "is_configured": false, 00:16:55.410 "data_offset": 0, 00:16:55.410 "data_size": 63488 00:16:55.410 }, 00:16:55.410 { 00:16:55.410 "name": null, 00:16:55.410 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:55.410 "is_configured": false, 00:16:55.410 "data_offset": 0, 00:16:55.410 "data_size": 63488 00:16:55.410 }, 00:16:55.410 { 00:16:55.410 "name": "BaseBdev4", 00:16:55.410 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:55.410 "is_configured": true, 00:16:55.410 "data_offset": 2048, 00:16:55.410 "data_size": 63488 00:16:55.410 } 00:16:55.410 ] 00:16:55.410 }' 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.410 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.670 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.671 [2024-11-20 17:51:22.816837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.671 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.931 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.931 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.931 "name": "Existed_Raid", 00:16:55.931 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:55.931 "strip_size_kb": 64, 00:16:55.931 "state": "configuring", 00:16:55.931 "raid_level": "raid5f", 00:16:55.931 "superblock": true, 00:16:55.931 "num_base_bdevs": 4, 00:16:55.931 "num_base_bdevs_discovered": 3, 00:16:55.931 "num_base_bdevs_operational": 4, 00:16:55.931 "base_bdevs_list": [ 00:16:55.931 { 00:16:55.931 "name": "BaseBdev1", 00:16:55.931 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 00:16:55.931 "is_configured": true, 00:16:55.931 "data_offset": 2048, 00:16:55.931 "data_size": 63488 00:16:55.931 }, 00:16:55.931 { 00:16:55.931 "name": null, 00:16:55.931 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:55.931 "is_configured": false, 00:16:55.931 "data_offset": 0, 00:16:55.931 "data_size": 63488 00:16:55.931 }, 00:16:55.931 { 00:16:55.931 "name": "BaseBdev3", 00:16:55.931 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 
00:16:55.931 "is_configured": true, 00:16:55.931 "data_offset": 2048, 00:16:55.931 "data_size": 63488 00:16:55.931 }, 00:16:55.931 { 00:16:55.931 "name": "BaseBdev4", 00:16:55.931 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:55.931 "is_configured": true, 00:16:55.931 "data_offset": 2048, 00:16:55.931 "data_size": 63488 00:16:55.931 } 00:16:55.931 ] 00:16:55.931 }' 00:16:55.931 17:51:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.931 17:51:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.190 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.190 [2024-11-20 17:51:23.288098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.449 "name": "Existed_Raid", 00:16:56.449 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:56.449 "strip_size_kb": 64, 00:16:56.449 "state": "configuring", 00:16:56.449 "raid_level": "raid5f", 
00:16:56.449 "superblock": true, 00:16:56.449 "num_base_bdevs": 4, 00:16:56.449 "num_base_bdevs_discovered": 2, 00:16:56.449 "num_base_bdevs_operational": 4, 00:16:56.449 "base_bdevs_list": [ 00:16:56.449 { 00:16:56.449 "name": null, 00:16:56.449 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 00:16:56.449 "is_configured": false, 00:16:56.449 "data_offset": 0, 00:16:56.449 "data_size": 63488 00:16:56.449 }, 00:16:56.449 { 00:16:56.449 "name": null, 00:16:56.449 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:56.449 "is_configured": false, 00:16:56.449 "data_offset": 0, 00:16:56.449 "data_size": 63488 00:16:56.449 }, 00:16:56.449 { 00:16:56.449 "name": "BaseBdev3", 00:16:56.449 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:56.449 "is_configured": true, 00:16:56.449 "data_offset": 2048, 00:16:56.449 "data_size": 63488 00:16:56.449 }, 00:16:56.449 { 00:16:56.449 "name": "BaseBdev4", 00:16:56.449 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:56.449 "is_configured": true, 00:16:56.449 "data_offset": 2048, 00:16:56.449 "data_size": 63488 00:16:56.449 } 00:16:56.449 ] 00:16:56.449 }' 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.449 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.709 [2024-11-20 17:51:23.873113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.709 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.970 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:56.970 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.970 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.970 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.970 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.970 17:51:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.970 "name": "Existed_Raid", 00:16:56.970 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:56.970 "strip_size_kb": 64, 00:16:56.970 "state": "configuring", 00:16:56.970 "raid_level": "raid5f", 00:16:56.970 "superblock": true, 00:16:56.970 "num_base_bdevs": 4, 00:16:56.970 "num_base_bdevs_discovered": 3, 00:16:56.970 "num_base_bdevs_operational": 4, 00:16:56.970 "base_bdevs_list": [ 00:16:56.970 { 00:16:56.970 "name": null, 00:16:56.970 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 00:16:56.970 "is_configured": false, 00:16:56.970 "data_offset": 0, 00:16:56.970 "data_size": 63488 00:16:56.970 }, 00:16:56.970 { 00:16:56.970 "name": "BaseBdev2", 00:16:56.970 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:56.970 "is_configured": true, 00:16:56.970 "data_offset": 2048, 00:16:56.970 "data_size": 63488 00:16:56.970 }, 00:16:56.970 { 00:16:56.970 "name": "BaseBdev3", 00:16:56.970 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:56.970 "is_configured": true, 00:16:56.970 "data_offset": 2048, 00:16:56.970 "data_size": 63488 00:16:56.970 }, 00:16:56.970 { 00:16:56.970 "name": "BaseBdev4", 00:16:56.970 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:56.970 "is_configured": true, 00:16:56.970 "data_offset": 2048, 00:16:56.970 "data_size": 63488 00:16:56.970 } 00:16:56.970 ] 00:16:56.970 }' 00:16:56.970 17:51:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.970 17:51:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u abbbd82c-1906-42ce-b6b1-7f3c78a050f7 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 [2024-11-20 17:51:24.394813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:57.231 [2024-11-20 
17:51:24.395154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:57.231 [2024-11-20 17:51:24.395202] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:57.231 [2024-11-20 17:51:24.395513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:57.231 NewBaseBdev 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.231 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 [2024-11-20 17:51:24.402337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:57.231 [2024-11-20 17:51:24.402418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:57.231 [2024-11-20 17:51:24.402601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.492 17:51:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.492 [ 00:16:57.492 { 00:16:57.492 "name": "NewBaseBdev", 00:16:57.492 "aliases": [ 00:16:57.492 "abbbd82c-1906-42ce-b6b1-7f3c78a050f7" 00:16:57.492 ], 00:16:57.492 "product_name": "Malloc disk", 00:16:57.492 "block_size": 512, 00:16:57.492 "num_blocks": 65536, 00:16:57.492 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 00:16:57.492 "assigned_rate_limits": { 00:16:57.492 "rw_ios_per_sec": 0, 00:16:57.492 "rw_mbytes_per_sec": 0, 00:16:57.492 "r_mbytes_per_sec": 0, 00:16:57.492 "w_mbytes_per_sec": 0 00:16:57.492 }, 00:16:57.492 "claimed": true, 00:16:57.492 "claim_type": "exclusive_write", 00:16:57.492 "zoned": false, 00:16:57.492 "supported_io_types": { 00:16:57.492 "read": true, 00:16:57.492 "write": true, 00:16:57.492 "unmap": true, 00:16:57.492 "flush": true, 00:16:57.492 "reset": true, 00:16:57.492 "nvme_admin": false, 00:16:57.492 "nvme_io": false, 00:16:57.492 "nvme_io_md": false, 00:16:57.492 "write_zeroes": true, 00:16:57.492 "zcopy": true, 00:16:57.492 "get_zone_info": false, 00:16:57.492 "zone_management": false, 00:16:57.492 "zone_append": false, 00:16:57.492 "compare": false, 00:16:57.492 "compare_and_write": false, 00:16:57.492 "abort": true, 00:16:57.492 "seek_hole": false, 00:16:57.492 "seek_data": false, 00:16:57.492 "copy": true, 00:16:57.492 "nvme_iov_md": false 00:16:57.492 }, 00:16:57.492 "memory_domains": [ 00:16:57.492 { 00:16:57.492 "dma_device_id": "system", 00:16:57.492 "dma_device_type": 1 00:16:57.492 }, 00:16:57.492 { 00:16:57.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:57.492 "dma_device_type": 2 00:16:57.492 } 00:16:57.492 ], 00:16:57.492 "driver_specific": {} 00:16:57.492 } 00:16:57.492 ] 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.492 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.492 "name": "Existed_Raid", 00:16:57.492 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:57.492 "strip_size_kb": 64, 00:16:57.492 "state": "online", 00:16:57.492 "raid_level": "raid5f", 00:16:57.492 "superblock": true, 00:16:57.492 "num_base_bdevs": 4, 00:16:57.492 "num_base_bdevs_discovered": 4, 00:16:57.492 "num_base_bdevs_operational": 4, 00:16:57.492 "base_bdevs_list": [ 00:16:57.492 { 00:16:57.492 "name": "NewBaseBdev", 00:16:57.492 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 00:16:57.492 "is_configured": true, 00:16:57.492 "data_offset": 2048, 00:16:57.492 "data_size": 63488 00:16:57.492 }, 00:16:57.492 { 00:16:57.492 "name": "BaseBdev2", 00:16:57.492 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:57.492 "is_configured": true, 00:16:57.492 "data_offset": 2048, 00:16:57.492 "data_size": 63488 00:16:57.492 }, 00:16:57.492 { 00:16:57.492 "name": "BaseBdev3", 00:16:57.492 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:57.492 "is_configured": true, 00:16:57.492 "data_offset": 2048, 00:16:57.492 "data_size": 63488 00:16:57.492 }, 00:16:57.492 { 00:16:57.492 "name": "BaseBdev4", 00:16:57.492 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:57.492 "is_configured": true, 00:16:57.493 "data_offset": 2048, 00:16:57.493 "data_size": 63488 00:16:57.493 } 00:16:57.493 ] 00:16:57.493 }' 00:16:57.493 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.493 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:57.753 17:51:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:57.753 [2024-11-20 17:51:24.906757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.753 17:51:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.015 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.015 "name": "Existed_Raid", 00:16:58.015 "aliases": [ 00:16:58.015 "74818876-f4c7-4563-a518-6806d2605e63" 00:16:58.015 ], 00:16:58.015 "product_name": "Raid Volume", 00:16:58.015 "block_size": 512, 00:16:58.015 "num_blocks": 190464, 00:16:58.015 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:58.015 "assigned_rate_limits": { 00:16:58.015 "rw_ios_per_sec": 0, 00:16:58.015 "rw_mbytes_per_sec": 0, 00:16:58.015 "r_mbytes_per_sec": 0, 00:16:58.015 "w_mbytes_per_sec": 0 00:16:58.015 }, 00:16:58.015 "claimed": false, 00:16:58.015 "zoned": false, 00:16:58.015 "supported_io_types": { 00:16:58.015 "read": true, 00:16:58.015 
"write": true, 00:16:58.015 "unmap": false, 00:16:58.015 "flush": false, 00:16:58.015 "reset": true, 00:16:58.015 "nvme_admin": false, 00:16:58.015 "nvme_io": false, 00:16:58.015 "nvme_io_md": false, 00:16:58.015 "write_zeroes": true, 00:16:58.015 "zcopy": false, 00:16:58.015 "get_zone_info": false, 00:16:58.015 "zone_management": false, 00:16:58.015 "zone_append": false, 00:16:58.015 "compare": false, 00:16:58.015 "compare_and_write": false, 00:16:58.015 "abort": false, 00:16:58.015 "seek_hole": false, 00:16:58.015 "seek_data": false, 00:16:58.015 "copy": false, 00:16:58.015 "nvme_iov_md": false 00:16:58.015 }, 00:16:58.015 "driver_specific": { 00:16:58.015 "raid": { 00:16:58.015 "uuid": "74818876-f4c7-4563-a518-6806d2605e63", 00:16:58.015 "strip_size_kb": 64, 00:16:58.015 "state": "online", 00:16:58.015 "raid_level": "raid5f", 00:16:58.015 "superblock": true, 00:16:58.015 "num_base_bdevs": 4, 00:16:58.015 "num_base_bdevs_discovered": 4, 00:16:58.015 "num_base_bdevs_operational": 4, 00:16:58.015 "base_bdevs_list": [ 00:16:58.015 { 00:16:58.015 "name": "NewBaseBdev", 00:16:58.015 "uuid": "abbbd82c-1906-42ce-b6b1-7f3c78a050f7", 00:16:58.015 "is_configured": true, 00:16:58.015 "data_offset": 2048, 00:16:58.015 "data_size": 63488 00:16:58.015 }, 00:16:58.015 { 00:16:58.015 "name": "BaseBdev2", 00:16:58.015 "uuid": "33c97a24-8c7b-40a8-9a91-91b34e934687", 00:16:58.015 "is_configured": true, 00:16:58.015 "data_offset": 2048, 00:16:58.015 "data_size": 63488 00:16:58.015 }, 00:16:58.015 { 00:16:58.015 "name": "BaseBdev3", 00:16:58.015 "uuid": "ab1c1abf-0cb4-4d7b-991f-0c5dddd31e66", 00:16:58.015 "is_configured": true, 00:16:58.015 "data_offset": 2048, 00:16:58.015 "data_size": 63488 00:16:58.015 }, 00:16:58.015 { 00:16:58.015 "name": "BaseBdev4", 00:16:58.015 "uuid": "e44c8f0d-5a0a-49a6-8d45-19d3f95388cc", 00:16:58.015 "is_configured": true, 00:16:58.015 "data_offset": 2048, 00:16:58.015 "data_size": 63488 00:16:58.015 } 00:16:58.015 ] 00:16:58.015 } 00:16:58.015 } 
00:16:58.015 }' 00:16:58.015 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.015 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:58.015 BaseBdev2 00:16:58.015 BaseBdev3 00:16:58.015 BaseBdev4' 00:16:58.015 17:51:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.015 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.275 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.275 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.276 [2024-11-20 17:51:25.237959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.276 [2024-11-20 17:51:25.238054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.276 [2024-11-20 17:51:25.238162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.276 [2024-11-20 17:51:25.238497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.276 [2024-11-20 17:51:25.238551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83914 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83914 ']' 00:16:58.276 17:51:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83914 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83914 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.276 killing process with pid 83914 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83914' 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83914 00:16:58.276 [2024-11-20 17:51:25.287371] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.276 17:51:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83914 00:16:58.845 [2024-11-20 17:51:25.709927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.785 17:51:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:59.785 00:16:59.785 real 0m11.413s 00:16:59.785 user 0m17.781s 00:16:59.785 sys 0m2.206s 00:16:59.785 17:51:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.785 17:51:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.785 ************************************ 00:16:59.785 END TEST raid5f_state_function_test_sb 00:16:59.785 ************************************ 00:17:00.045 17:51:26 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:17:00.045 17:51:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:00.045 17:51:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.045 17:51:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.045 ************************************ 00:17:00.045 START TEST raid5f_superblock_test 00:17:00.045 ************************************ 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84579 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84579 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84579 ']' 00:17:00.045 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.046 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.046 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.046 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.046 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.046 [2024-11-20 17:51:27.110518] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:17:00.046 [2024-11-20 17:51:27.110721] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84579 ] 00:17:00.306 [2024-11-20 17:51:27.283582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.306 [2024-11-20 17:51:27.424139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.566 [2024-11-20 17:51:27.651981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.566 [2024-11-20 17:51:27.652138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:00.826 malloc1
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:00.826 [2024-11-20 17:51:27.981617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:00.826 [2024-11-20 17:51:27.981782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:00.826 [2024-11-20 17:51:27.981826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:00.826 [2024-11-20 17:51:27.981857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:00.826 [2024-11-20 17:51:27.984290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:00.826 [2024-11-20 17:51:27.984363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:00.826 pt1
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.826 17:51:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.087 malloc2
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.087 [2024-11-20 17:51:28.048396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:01.087 [2024-11-20 17:51:28.048460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:01.087 [2024-11-20 17:51:28.048488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:01.087 [2024-11-20 17:51:28.048497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:01.087 [2024-11-20 17:51:28.050889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:01.087 [2024-11-20 17:51:28.050924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:01.087 pt2
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.087 malloc3
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.087 [2024-11-20 17:51:28.122000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:01.087 [2024-11-20 17:51:28.122139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:01.087 [2024-11-20 17:51:28.122181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:01.087 [2024-11-20 17:51:28.122208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:01.087 [2024-11-20 17:51:28.124632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:01.087 [2024-11-20 17:51:28.124718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:01.087 pt3
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.087 malloc4
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.087 [2024-11-20 17:51:28.185965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:17:01.087 [2024-11-20 17:51:28.186129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:01.087 [2024-11-20 17:51:28.186188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:17:01.087 [2024-11-20 17:51:28.186223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:01.087 [2024-11-20 17:51:28.188584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:01.087 [2024-11-20 17:51:28.188654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:17:01.087 pt4
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.087 [2024-11-20 17:51:28.197986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:01.087 [2024-11-20 17:51:28.200164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:01.087 [2024-11-20 17:51:28.200247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:01.087 [2024-11-20 17:51:28.200294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:17:01.087 [2024-11-20 17:51:28.200487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:17:01.087 [2024-11-20 17:51:28.200502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:17:01.087 [2024-11-20 17:51:28.200746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:17:01.087 [2024-11-20 17:51:28.207877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:17:01.087 [2024-11-20 17:51:28.207934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:17:01.087 [2024-11-20 17:51:28.208197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:01.087
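(Aside: the `blockcnt 190464, blocklen 512` reported at raid creation follows from the numbers elsewhere in this log: each base bdev exposes 63488 data blocks (`data_size`, after the 2048-block `data_offset` reserved for the superblock), and raid5f keeps one base bdev's worth of parity per stripe, leaving (n - 1) bdevs of usable capacity. A minimal sketch of that arithmetic; the helper name is ours, not SPDK's:)

```python
def raid5f_usable_blocks(num_base_bdevs: int, data_blocks_per_bdev: int) -> int:
    # raid5f stores one parity strip per stripe, so usable capacity
    # is (n - 1) base bdevs' worth of data blocks.
    return (num_base_bdevs - 1) * data_blocks_per_bdev

# Values from the log: 4 base bdevs, data_size 63488 blocks each
# (65536 total minus data_offset 2048), giving blockcnt 190464.
print(raid5f_usable_blocks(4, 63488))  # 190464
```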
17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.087 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.347 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:01.347 "name": "raid_bdev1",
00:17:01.347 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b",
00:17:01.347 "strip_size_kb": 64,
00:17:01.347 "state": "online",
00:17:01.347 "raid_level": "raid5f",
00:17:01.347 "superblock": true,
00:17:01.347 "num_base_bdevs": 4,
00:17:01.347 "num_base_bdevs_discovered": 4,
00:17:01.347 "num_base_bdevs_operational": 4,
00:17:01.347 "base_bdevs_list": [
00:17:01.347 {
00:17:01.347 "name": "pt1",
00:17:01.347 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:01.347 "is_configured": true,
00:17:01.347 "data_offset": 2048,
00:17:01.347 "data_size": 63488
00:17:01.347 },
00:17:01.347 {
00:17:01.347 "name": "pt2",
00:17:01.347 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:01.347 "is_configured": true,
00:17:01.347 "data_offset": 2048,
00:17:01.347 "data_size": 63488
00:17:01.347 },
00:17:01.347 {
00:17:01.347 "name": "pt3",
00:17:01.347 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:01.347 "is_configured": true,
00:17:01.347 "data_offset": 2048,
00:17:01.347 "data_size": 63488
00:17:01.347 },
00:17:01.347 {
00:17:01.347 "name": "pt4",
00:17:01.347 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:01.347 "is_configured": true,
00:17:01.347 "data_offset": 2048,
00:17:01.347 "data_size": 63488
00:17:01.347 }
00:17:01.347 ]
00:17:01.347 }'
00:17:01.347 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:01.347 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.606 [2024-11-20 17:51:28.689296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.606 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:01.606 "name": "raid_bdev1",
00:17:01.606 "aliases": [
00:17:01.606 "c0d59e8e-6d91-4f45-b328-c087a0aa727b"
00:17:01.606 ],
00:17:01.606 "product_name": "Raid Volume",
00:17:01.606 "block_size": 512,
00:17:01.606 "num_blocks": 190464,
00:17:01.606 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b",
00:17:01.606 "assigned_rate_limits": {
00:17:01.606 "rw_ios_per_sec": 0,
00:17:01.606 "rw_mbytes_per_sec": 0,
00:17:01.606 "r_mbytes_per_sec": 0,
00:17:01.606 "w_mbytes_per_sec": 0
00:17:01.606 },
00:17:01.606 "claimed": false,
00:17:01.606 "zoned": false,
00:17:01.606 "supported_io_types": {
00:17:01.606 "read": true,
00:17:01.606 "write": true,
00:17:01.606 "unmap": false,
00:17:01.606 "flush": false,
00:17:01.606 "reset": true,
00:17:01.606 "nvme_admin": false,
00:17:01.606 "nvme_io": false,
00:17:01.606 "nvme_io_md": false,
00:17:01.606 "write_zeroes": true,
00:17:01.606 "zcopy": false,
00:17:01.606 "get_zone_info": false,
00:17:01.606 "zone_management": false,
00:17:01.606 "zone_append": false,
00:17:01.606 "compare": false,
00:17:01.606 "compare_and_write": false,
00:17:01.606 "abort": false,
00:17:01.606 "seek_hole": false,
00:17:01.606 "seek_data": false,
00:17:01.606 "copy": false,
00:17:01.606 "nvme_iov_md": false
00:17:01.606 },
00:17:01.606 "driver_specific": {
00:17:01.606 "raid": {
00:17:01.606 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b",
00:17:01.606 "strip_size_kb": 64,
00:17:01.606 "state": "online",
00:17:01.607 "raid_level": "raid5f",
00:17:01.607 "superblock": true,
00:17:01.607 "num_base_bdevs": 4,
00:17:01.607 "num_base_bdevs_discovered": 4,
00:17:01.607 "num_base_bdevs_operational": 4,
00:17:01.607 "base_bdevs_list": [
00:17:01.607 {
00:17:01.607 "name": "pt1",
00:17:01.607 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:01.607 "is_configured": true,
00:17:01.607 "data_offset": 2048,
00:17:01.607 "data_size": 63488
00:17:01.607 },
00:17:01.607 {
00:17:01.607 "name": "pt2",
00:17:01.607 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:01.607 "is_configured": true,
00:17:01.607 "data_offset": 2048,
00:17:01.607 "data_size": 63488
00:17:01.607 },
00:17:01.607 {
00:17:01.607 "name": "pt3",
00:17:01.607 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:01.607 "is_configured": true,
00:17:01.607 "data_offset": 2048,
00:17:01.607 "data_size": 63488
00:17:01.607 },
00:17:01.607 {
00:17:01.607 "name": "pt4",
00:17:01.607 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:01.607 "is_configured": true,
00:17:01.607 "data_offset": 2048,
00:17:01.607 "data_size": 63488
00:17:01.607 }
00:17:01.607 ]
00:17:01.607 }
00:17:01.607 }
00:17:01.607 }'
00:17:01.607 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:01.607 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:01.607 pt2
00:17:01.607 pt3
00:17:01.607 pt4'
00:17:01.607 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.867 17:51:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.867 [2024-11-20 17:51:29.005179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:01.867 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c0d59e8e-6d91-4f45-b328-c087a0aa727b
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z
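(Aside: the repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above compare the string `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` produces for the raid bdev against each base bdev. In jq, `join(" ")` renders numbers as strings and nulls as empty strings, so a plain 512-byte bdev with no metadata or DIF yields `"512"` followed by three separator spaces; the bash pattern's escaped trailing spaces match exactly that. A small sketch of the join semantics, with a helper name of our own:)

```python
def jq_join(values, sep=" "):
    # Mimic jq's join(" "): numbers become strings, null (None) becomes "".
    return sep.join("" if v is None else str(v) for v in values)

# A base bdev here: block_size 512, no md_size/md_interleave/dif_type.
print(repr(jq_join([512, None, None, None])))  # '512   '
```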
c0d59e8e-6d91-4f45-b328-c087a0aa727b ']'
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.127 [2024-11-20 17:51:29.049032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:02.127 [2024-11-20 17:51:29.049093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:02.127 [2024-11-20 17:51:29.049182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:02.127 [2024-11-20 17:51:29.049275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:02.127 [2024-11-20 17:51:29.049292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:17:02.127 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.128 [2024-11-20 17:51:29.221044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:02.128 [2024-11-20 17:51:29.223307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:02.128 [2024-11-20 17:51:29.223408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:17:02.128 [2024-11-20 17:51:29.223459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:17:02.128 [2024-11-20 17:51:29.223549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:17:02.128 [2024-11-20 17:51:29.223635] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:02.128 [2024-11-20 17:51:29.223680] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:17:02.128 [2024-11-20 17:51:29.223701] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:17:02.128 [2024-11-20 17:51:29.223713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:02.128 [2024-11-20 17:51:29.223724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:17:02.128 request:
00:17:02.128 {
00:17:02.128 "name": "raid_bdev1",
00:17:02.128 "raid_level": "raid5f",
00:17:02.128 "base_bdevs": [
00:17:02.128 "malloc1",
00:17:02.128 "malloc2",
00:17:02.128 "malloc3",
00:17:02.128 "malloc4"
00:17:02.128 ],
00:17:02.128 "strip_size_kb": 64,
00:17:02.128 "superblock": false,
00:17:02.128 "method": "bdev_raid_create",
00:17:02.128 "req_id": 1
00:17:02.128 }
00:17:02.128 Got JSON-RPC error response
00:17:02.128 response:
00:17:02.128 {
00:17:02.128 "code": -17,
00:17:02.128 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:02.128 }
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.128 [2024-11-20 17:51:29.288870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:02.128 [2024-11-20 17:51:29.288980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*:
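(Aside: the negative test above expects `bdev_raid_create` to fail, because the malloc bdevs still carry the superblock of the deleted array; autotest's `NOT` wrapper succeeds only when the wrapped command exits nonzero, which is why `es=1` is the passing path. SPDK's JSON-RPC errors reuse negative errno values, so the `-17` in the response is `-EEXIST`. A small sketch of checking such a response, with the JSON structure copied from the log:)

```python
import errno
import json

# Error response as shown in the log for the rejected bdev_raid_create call.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 is -EEXIST: each base bdev already holds a raid superblock.
assert response["code"] == -errno.EEXIST
print(errno.errorcode[-response["code"]])  # EEXIST
```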
base bdev opened
00:17:02.128 [2024-11-20 17:51:29.289012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:02.128 [2024-11-20 17:51:29.289052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:02.128 [2024-11-20 17:51:29.291506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:02.128 [2024-11-20 17:51:29.291593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:02.128 [2024-11-20 17:51:29.291694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:02.128 [2024-11-20 17:51:29.291781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:02.128 pt1
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:02.128 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:02.394 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:02.394 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:02.394 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.394 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.394 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.394 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:02.394 "name": "raid_bdev1",
00:17:02.394 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b",
00:17:02.394 "strip_size_kb": 64,
00:17:02.394 "state": "configuring",
00:17:02.394 "raid_level": "raid5f",
00:17:02.394 "superblock": true,
00:17:02.394 "num_base_bdevs": 4,
00:17:02.394 "num_base_bdevs_discovered": 1,
00:17:02.394 "num_base_bdevs_operational": 4,
00:17:02.394 "base_bdevs_list": [
00:17:02.394 {
00:17:02.394 "name": "pt1",
00:17:02.394 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:02.394 "is_configured": true,
00:17:02.394 "data_offset": 2048,
00:17:02.394 "data_size": 63488
00:17:02.394 },
00:17:02.394 {
00:17:02.394 "name": null,
00:17:02.394 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:02.394 "is_configured": false,
00:17:02.394 "data_offset": 2048,
00:17:02.394 "data_size": 63488
00:17:02.394 },
00:17:02.394 {
00:17:02.394 "name": null,
00:17:02.394 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:02.394 "is_configured": false,
00:17:02.394 "data_offset": 2048,
00:17:02.394 "data_size": 63488
00:17:02.394 },
00:17:02.394 {
00:17:02.394 "name": null,
00:17:02.394 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:02.394 "is_configured": false,
00:17:02.394 "data_offset": 2048,
00:17:02.394 "data_size": 63488
00:17:02.394 }
00:17:02.394 ]
00:17:02.394 }'
00:17:02.394 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:02.394 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.661 [2024-11-20 17:51:29.736124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:02.661 [2024-11-20 17:51:29.736192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:02.661 [2024-11-20 17:51:29.736212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:17:02.661 [2024-11-20 17:51:29.736223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:02.661 [2024-11-20 17:51:29.736683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:02.661 [2024-11-20 17:51:29.736708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:02.661 [2024-11-20 17:51:29.736802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:02.661 [2024-11-20 17:51:29.736828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:02.661 pt2
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.661 [2024-11-20 17:51:29.748117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0
]] 00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.661 "name": "raid_bdev1", 00:17:02.661 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:02.661 "strip_size_kb": 64, 00:17:02.661 "state": "configuring", 00:17:02.661 "raid_level": "raid5f", 00:17:02.661 "superblock": true, 00:17:02.661 "num_base_bdevs": 4, 00:17:02.661 "num_base_bdevs_discovered": 1, 00:17:02.661 "num_base_bdevs_operational": 4, 00:17:02.661 "base_bdevs_list": [ 00:17:02.661 { 00:17:02.661 "name": "pt1", 00:17:02.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.661 "is_configured": true, 00:17:02.661 "data_offset": 2048, 00:17:02.661 "data_size": 63488 00:17:02.661 }, 00:17:02.661 { 00:17:02.661 "name": null, 00:17:02.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.661 "is_configured": false, 00:17:02.661 "data_offset": 0, 00:17:02.661 "data_size": 63488 00:17:02.661 }, 00:17:02.661 { 00:17:02.661 "name": null, 00:17:02.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.661 "is_configured": false, 00:17:02.661 "data_offset": 2048, 00:17:02.661 "data_size": 63488 00:17:02.661 }, 00:17:02.661 { 00:17:02.661 "name": null, 00:17:02.661 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.661 "is_configured": false, 00:17:02.661 "data_offset": 2048, 00:17:02.661 "data_size": 63488 00:17:02.661 } 00:17:02.661 ] 00:17:02.661 }' 00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.661 17:51:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.231 [2024-11-20 17:51:30.207284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.231 [2024-11-20 17:51:30.207381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.231 [2024-11-20 17:51:30.207415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:03.231 [2024-11-20 17:51:30.207441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.231 [2024-11-20 17:51:30.207896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.231 [2024-11-20 17:51:30.207950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.231 [2024-11-20 17:51:30.208061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:03.231 [2024-11-20 17:51:30.208110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.231 pt2 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.231 [2024-11-20 17:51:30.219272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:03.231 [2024-11-20 17:51:30.219349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.231 [2024-11-20 17:51:30.219387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:03.231 [2024-11-20 17:51:30.219416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.231 [2024-11-20 17:51:30.219793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.231 [2024-11-20 17:51:30.219845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.231 [2024-11-20 17:51:30.219929] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:03.231 [2024-11-20 17:51:30.219980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:03.231 pt3 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.231 [2024-11-20 17:51:30.231220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:03.231 [2024-11-20 17:51:30.231257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.231 [2024-11-20 17:51:30.231272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:03.231 [2024-11-20 17:51:30.231279] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.231 [2024-11-20 17:51:30.231643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.231 [2024-11-20 17:51:30.231659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:03.231 [2024-11-20 17:51:30.231713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:03.231 [2024-11-20 17:51:30.231732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:03.231 [2024-11-20 17:51:30.231860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:03.231 [2024-11-20 17:51:30.231868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.231 [2024-11-20 17:51:30.232113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:03.231 [2024-11-20 17:51:30.238788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.231 pt4 00:17:03.231 [2024-11-20 17:51:30.238853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:03.231 [2024-11-20 17:51:30.239065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.231 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.232 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.232 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.232 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.232 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.232 "name": "raid_bdev1", 00:17:03.232 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:03.232 "strip_size_kb": 64, 00:17:03.232 "state": "online", 00:17:03.232 "raid_level": "raid5f", 00:17:03.232 "superblock": true, 00:17:03.232 "num_base_bdevs": 4, 00:17:03.232 "num_base_bdevs_discovered": 4, 00:17:03.232 "num_base_bdevs_operational": 4, 00:17:03.232 "base_bdevs_list": [ 00:17:03.232 { 00:17:03.232 "name": "pt1", 00:17:03.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.232 "is_configured": true, 00:17:03.232 
"data_offset": 2048, 00:17:03.232 "data_size": 63488 00:17:03.232 }, 00:17:03.232 { 00:17:03.232 "name": "pt2", 00:17:03.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.232 "is_configured": true, 00:17:03.232 "data_offset": 2048, 00:17:03.232 "data_size": 63488 00:17:03.232 }, 00:17:03.232 { 00:17:03.232 "name": "pt3", 00:17:03.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.232 "is_configured": true, 00:17:03.232 "data_offset": 2048, 00:17:03.232 "data_size": 63488 00:17:03.232 }, 00:17:03.232 { 00:17:03.232 "name": "pt4", 00:17:03.232 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.232 "is_configured": true, 00:17:03.232 "data_offset": 2048, 00:17:03.232 "data_size": 63488 00:17:03.232 } 00:17:03.232 ] 00:17:03.232 }' 00:17:03.232 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.232 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.802 17:51:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.802 [2024-11-20 17:51:30.707958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.802 "name": "raid_bdev1", 00:17:03.802 "aliases": [ 00:17:03.802 "c0d59e8e-6d91-4f45-b328-c087a0aa727b" 00:17:03.802 ], 00:17:03.802 "product_name": "Raid Volume", 00:17:03.802 "block_size": 512, 00:17:03.802 "num_blocks": 190464, 00:17:03.802 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:03.802 "assigned_rate_limits": { 00:17:03.802 "rw_ios_per_sec": 0, 00:17:03.802 "rw_mbytes_per_sec": 0, 00:17:03.802 "r_mbytes_per_sec": 0, 00:17:03.802 "w_mbytes_per_sec": 0 00:17:03.802 }, 00:17:03.802 "claimed": false, 00:17:03.802 "zoned": false, 00:17:03.802 "supported_io_types": { 00:17:03.802 "read": true, 00:17:03.802 "write": true, 00:17:03.802 "unmap": false, 00:17:03.802 "flush": false, 00:17:03.802 "reset": true, 00:17:03.802 "nvme_admin": false, 00:17:03.802 "nvme_io": false, 00:17:03.802 "nvme_io_md": false, 00:17:03.802 "write_zeroes": true, 00:17:03.802 "zcopy": false, 00:17:03.802 "get_zone_info": false, 00:17:03.802 "zone_management": false, 00:17:03.802 "zone_append": false, 00:17:03.802 "compare": false, 00:17:03.802 "compare_and_write": false, 00:17:03.802 "abort": false, 00:17:03.802 "seek_hole": false, 00:17:03.802 "seek_data": false, 00:17:03.802 "copy": false, 00:17:03.802 "nvme_iov_md": false 00:17:03.802 }, 00:17:03.802 "driver_specific": { 00:17:03.802 "raid": { 00:17:03.802 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:03.802 "strip_size_kb": 64, 00:17:03.802 "state": "online", 00:17:03.802 "raid_level": "raid5f", 00:17:03.802 "superblock": true, 00:17:03.802 "num_base_bdevs": 4, 00:17:03.802 "num_base_bdevs_discovered": 4, 
00:17:03.802 "num_base_bdevs_operational": 4, 00:17:03.802 "base_bdevs_list": [ 00:17:03.802 { 00:17:03.802 "name": "pt1", 00:17:03.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.802 "is_configured": true, 00:17:03.802 "data_offset": 2048, 00:17:03.802 "data_size": 63488 00:17:03.802 }, 00:17:03.802 { 00:17:03.802 "name": "pt2", 00:17:03.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.802 "is_configured": true, 00:17:03.802 "data_offset": 2048, 00:17:03.802 "data_size": 63488 00:17:03.802 }, 00:17:03.802 { 00:17:03.802 "name": "pt3", 00:17:03.802 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.802 "is_configured": true, 00:17:03.802 "data_offset": 2048, 00:17:03.802 "data_size": 63488 00:17:03.802 }, 00:17:03.802 { 00:17:03.802 "name": "pt4", 00:17:03.802 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.802 "is_configured": true, 00:17:03.802 "data_offset": 2048, 00:17:03.802 "data_size": 63488 00:17:03.802 } 00:17:03.802 ] 00:17:03.802 } 00:17:03.802 } 00:17:03.802 }' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:03.802 pt2 00:17:03.802 pt3 00:17:03.802 pt4' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.802 17:51:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.802 
17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.802 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.063 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.063 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.063 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.063 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.063 17:51:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:04.063 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.063 17:51:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:04.063 [2024-11-20 17:51:31.039316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c0d59e8e-6d91-4f45-b328-c087a0aa727b '!=' c0d59e8e-6d91-4f45-b328-c087a0aa727b ']' 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.063 [2024-11-20 17:51:31.075166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.063 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.064 "name": "raid_bdev1", 00:17:04.064 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:04.064 "strip_size_kb": 64, 00:17:04.064 "state": "online", 00:17:04.064 "raid_level": "raid5f", 00:17:04.064 "superblock": true, 00:17:04.064 "num_base_bdevs": 4, 00:17:04.064 "num_base_bdevs_discovered": 3, 00:17:04.064 "num_base_bdevs_operational": 3, 00:17:04.064 "base_bdevs_list": [ 00:17:04.064 { 00:17:04.064 "name": null, 00:17:04.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.064 "is_configured": false, 00:17:04.064 "data_offset": 0, 00:17:04.064 "data_size": 63488 00:17:04.064 }, 00:17:04.064 { 00:17:04.064 "name": "pt2", 00:17:04.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.064 "is_configured": true, 00:17:04.064 "data_offset": 2048, 00:17:04.064 "data_size": 63488 00:17:04.064 }, 00:17:04.064 { 00:17:04.064 "name": "pt3", 00:17:04.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.064 "is_configured": true, 00:17:04.064 "data_offset": 2048, 00:17:04.064 "data_size": 63488 00:17:04.064 }, 00:17:04.064 { 00:17:04.064 "name": "pt4", 00:17:04.064 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.064 "is_configured": true, 00:17:04.064 
"data_offset": 2048, 00:17:04.064 "data_size": 63488 00:17:04.064 } 00:17:04.064 ] 00:17:04.064 }' 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.064 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.324 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.324 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.324 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.324 [2024-11-20 17:51:31.482444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.324 [2024-11-20 17:51:31.482528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.324 [2024-11-20 17:51:31.482637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.324 [2024-11-20 17:51:31.482754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.324 [2024-11-20 17:51:31.482802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:04.324 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.324 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.324 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:04.324 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.324 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.583 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.584 [2024-11-20 17:51:31.578246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.584 [2024-11-20 17:51:31.578332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.584 [2024-11-20 17:51:31.578357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:04.584 [2024-11-20 17:51:31.578379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.584 [2024-11-20 17:51:31.580925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.584 [2024-11-20 17:51:31.580960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.584 [2024-11-20 17:51:31.581063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:04.584 [2024-11-20 17:51:31.581125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.584 pt2 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.584 "name": "raid_bdev1", 00:17:04.584 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:04.584 "strip_size_kb": 64, 00:17:04.584 "state": "configuring", 00:17:04.584 "raid_level": "raid5f", 00:17:04.584 "superblock": true, 00:17:04.584 
"num_base_bdevs": 4, 00:17:04.584 "num_base_bdevs_discovered": 1, 00:17:04.584 "num_base_bdevs_operational": 3, 00:17:04.584 "base_bdevs_list": [ 00:17:04.584 { 00:17:04.584 "name": null, 00:17:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.584 "is_configured": false, 00:17:04.584 "data_offset": 2048, 00:17:04.584 "data_size": 63488 00:17:04.584 }, 00:17:04.584 { 00:17:04.584 "name": "pt2", 00:17:04.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.584 "is_configured": true, 00:17:04.584 "data_offset": 2048, 00:17:04.584 "data_size": 63488 00:17:04.584 }, 00:17:04.584 { 00:17:04.584 "name": null, 00:17:04.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.584 "is_configured": false, 00:17:04.584 "data_offset": 2048, 00:17:04.584 "data_size": 63488 00:17:04.584 }, 00:17:04.584 { 00:17:04.584 "name": null, 00:17:04.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.584 "is_configured": false, 00:17:04.584 "data_offset": 2048, 00:17:04.584 "data_size": 63488 00:17:04.584 } 00:17:04.584 ] 00:17:04.584 }' 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.584 17:51:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.843 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:04.843 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:04.843 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:04.843 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.843 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.843 [2024-11-20 17:51:32.013501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:04.843 [2024-11-20 
17:51:32.013616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.843 [2024-11-20 17:51:32.013672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:04.843 [2024-11-20 17:51:32.013703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.843 [2024-11-20 17:51:32.014173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.843 [2024-11-20 17:51:32.014229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:04.843 [2024-11-20 17:51:32.014339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:04.843 [2024-11-20 17:51:32.014388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:04.843 pt3 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.103 "name": "raid_bdev1", 00:17:05.103 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:05.103 "strip_size_kb": 64, 00:17:05.103 "state": "configuring", 00:17:05.103 "raid_level": "raid5f", 00:17:05.103 "superblock": true, 00:17:05.103 "num_base_bdevs": 4, 00:17:05.103 "num_base_bdevs_discovered": 2, 00:17:05.103 "num_base_bdevs_operational": 3, 00:17:05.103 "base_bdevs_list": [ 00:17:05.103 { 00:17:05.103 "name": null, 00:17:05.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.103 "is_configured": false, 00:17:05.103 "data_offset": 2048, 00:17:05.103 "data_size": 63488 00:17:05.103 }, 00:17:05.103 { 00:17:05.103 "name": "pt2", 00:17:05.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.103 "is_configured": true, 00:17:05.103 "data_offset": 2048, 00:17:05.103 "data_size": 63488 00:17:05.103 }, 00:17:05.103 { 00:17:05.103 "name": "pt3", 00:17:05.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.103 "is_configured": true, 00:17:05.103 "data_offset": 2048, 00:17:05.103 "data_size": 63488 00:17:05.103 }, 00:17:05.103 { 00:17:05.103 "name": null, 00:17:05.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.103 "is_configured": false, 00:17:05.103 "data_offset": 2048, 
00:17:05.103 "data_size": 63488 00:17:05.103 } 00:17:05.103 ] 00:17:05.103 }' 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.103 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.363 [2024-11-20 17:51:32.488904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:05.363 [2024-11-20 17:51:32.489070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.363 [2024-11-20 17:51:32.489100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:05.363 [2024-11-20 17:51:32.489111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.363 [2024-11-20 17:51:32.489622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.363 [2024-11-20 17:51:32.489646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:05.363 [2024-11-20 17:51:32.489741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:05.363 [2024-11-20 17:51:32.489771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:05.363 [2024-11-20 17:51:32.489919] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:05.363 [2024-11-20 17:51:32.489935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:05.363 [2024-11-20 17:51:32.490220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:05.363 [2024-11-20 17:51:32.497315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:05.363 [2024-11-20 17:51:32.497381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:05.363 [2024-11-20 17:51:32.497748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.363 pt4 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.363 
17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.363 "name": "raid_bdev1", 00:17:05.363 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:05.363 "strip_size_kb": 64, 00:17:05.363 "state": "online", 00:17:05.363 "raid_level": "raid5f", 00:17:05.363 "superblock": true, 00:17:05.363 "num_base_bdevs": 4, 00:17:05.363 "num_base_bdevs_discovered": 3, 00:17:05.363 "num_base_bdevs_operational": 3, 00:17:05.363 "base_bdevs_list": [ 00:17:05.363 { 00:17:05.363 "name": null, 00:17:05.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.363 "is_configured": false, 00:17:05.363 "data_offset": 2048, 00:17:05.363 "data_size": 63488 00:17:05.363 }, 00:17:05.363 { 00:17:05.363 "name": "pt2", 00:17:05.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.363 "is_configured": true, 00:17:05.363 "data_offset": 2048, 00:17:05.363 "data_size": 63488 00:17:05.363 }, 00:17:05.363 { 00:17:05.363 "name": "pt3", 00:17:05.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.363 "is_configured": true, 00:17:05.363 "data_offset": 2048, 00:17:05.363 "data_size": 63488 00:17:05.363 }, 00:17:05.363 { 00:17:05.363 "name": "pt4", 00:17:05.363 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.363 "is_configured": true, 00:17:05.363 "data_offset": 2048, 00:17:05.363 "data_size": 63488 00:17:05.363 } 00:17:05.363 ] 00:17:05.363 }' 00:17:05.363 17:51:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.363 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 [2024-11-20 17:51:32.958830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.933 [2024-11-20 17:51:32.958899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.933 [2024-11-20 17:51:32.958995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.933 [2024-11-20 17:51:32.959143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.933 [2024-11-20 17:51:32.959194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:05.933 17:51:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 [2024-11-20 17:51:33.030701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.933 [2024-11-20 17:51:33.030810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.933 [2024-11-20 17:51:33.030841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:05.933 [2024-11-20 17:51:33.030857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.933 [2024-11-20 17:51:33.033488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.933 [2024-11-20 17:51:33.033534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.933 [2024-11-20 17:51:33.033626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:05.933 [2024-11-20 17:51:33.033681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.933 
[2024-11-20 17:51:33.033835] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:05.933 [2024-11-20 17:51:33.033910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.933 [2024-11-20 17:51:33.033931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:05.933 [2024-11-20 17:51:33.034032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.933 [2024-11-20 17:51:33.034174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:05.933 pt1 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.933 "name": "raid_bdev1", 00:17:05.933 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:05.933 "strip_size_kb": 64, 00:17:05.933 "state": "configuring", 00:17:05.933 "raid_level": "raid5f", 00:17:05.933 "superblock": true, 00:17:05.933 "num_base_bdevs": 4, 00:17:05.933 "num_base_bdevs_discovered": 2, 00:17:05.933 "num_base_bdevs_operational": 3, 00:17:05.933 "base_bdevs_list": [ 00:17:05.933 { 00:17:05.933 "name": null, 00:17:05.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.933 "is_configured": false, 00:17:05.933 "data_offset": 2048, 00:17:05.933 "data_size": 63488 00:17:05.933 }, 00:17:05.933 { 00:17:05.933 "name": "pt2", 00:17:05.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.933 "is_configured": true, 00:17:05.933 "data_offset": 2048, 00:17:05.933 "data_size": 63488 00:17:05.933 }, 00:17:05.933 { 00:17:05.933 "name": "pt3", 00:17:05.933 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.933 "is_configured": true, 00:17:05.933 "data_offset": 2048, 00:17:05.933 "data_size": 63488 00:17:05.933 }, 00:17:05.933 { 00:17:05.933 "name": null, 00:17:05.933 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.933 "is_configured": false, 00:17:05.933 "data_offset": 2048, 00:17:05.933 "data_size": 63488 00:17:05.933 } 00:17:05.933 ] 
00:17:05.933 }' 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.933 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.504 [2024-11-20 17:51:33.529854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:06.504 [2024-11-20 17:51:33.529956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.504 [2024-11-20 17:51:33.529997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:06.504 [2024-11-20 17:51:33.530038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.504 [2024-11-20 17:51:33.530499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.504 [2024-11-20 17:51:33.530560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:06.504 [2024-11-20 17:51:33.530665] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:06.504 [2024-11-20 17:51:33.530717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:06.504 [2024-11-20 17:51:33.530875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:06.504 [2024-11-20 17:51:33.530911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:06.504 [2024-11-20 17:51:33.531218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:06.504 [2024-11-20 17:51:33.538598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:06.504 pt4 00:17:06.504 [2024-11-20 17:51:33.538659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:06.504 [2024-11-20 17:51:33.538951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.504 17:51:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.504 "name": "raid_bdev1", 00:17:06.504 "uuid": "c0d59e8e-6d91-4f45-b328-c087a0aa727b", 00:17:06.504 "strip_size_kb": 64, 00:17:06.504 "state": "online", 00:17:06.504 "raid_level": "raid5f", 00:17:06.504 "superblock": true, 00:17:06.504 "num_base_bdevs": 4, 00:17:06.504 "num_base_bdevs_discovered": 3, 00:17:06.504 "num_base_bdevs_operational": 3, 00:17:06.504 "base_bdevs_list": [ 00:17:06.504 { 00:17:06.504 "name": null, 00:17:06.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.504 "is_configured": false, 00:17:06.504 "data_offset": 2048, 00:17:06.504 "data_size": 63488 00:17:06.504 }, 00:17:06.504 { 00:17:06.504 "name": "pt2", 00:17:06.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.504 "is_configured": true, 00:17:06.504 "data_offset": 2048, 00:17:06.504 "data_size": 63488 00:17:06.504 }, 00:17:06.504 { 00:17:06.504 "name": "pt3", 00:17:06.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.504 "is_configured": true, 00:17:06.504 "data_offset": 2048, 00:17:06.504 "data_size": 63488 
00:17:06.504 }, 00:17:06.504 { 00:17:06.504 "name": "pt4", 00:17:06.504 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.504 "is_configured": true, 00:17:06.504 "data_offset": 2048, 00:17:06.504 "data_size": 63488 00:17:06.504 } 00:17:06.504 ] 00:17:06.504 }' 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.504 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.074 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:07.074 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.074 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.074 17:51:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:07.074 17:51:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:07.074 [2024-11-20 17:51:34.032004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c0d59e8e-6d91-4f45-b328-c087a0aa727b '!=' c0d59e8e-6d91-4f45-b328-c087a0aa727b ']' 00:17:07.074 17:51:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84579 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84579 ']' 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84579 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84579 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.074 killing process with pid 84579 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84579' 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84579 00:17:07.074 [2024-11-20 17:51:34.113458] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:07.074 [2024-11-20 17:51:34.113547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.074 [2024-11-20 17:51:34.113628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.074 [2024-11-20 17:51:34.113642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:07.074 17:51:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84579 00:17:07.644 [2024-11-20 17:51:34.527692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.584 17:51:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:08.584 
00:17:08.584 real 0m8.695s 00:17:08.584 user 0m13.467s 00:17:08.584 sys 0m1.699s 00:17:08.584 17:51:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.584 ************************************ 00:17:08.584 END TEST raid5f_superblock_test 00:17:08.584 ************************************ 00:17:08.584 17:51:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.845 17:51:35 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:08.845 17:51:35 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:08.845 17:51:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:08.845 17:51:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.845 17:51:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.845 ************************************ 00:17:08.845 START TEST raid5f_rebuild_test 00:17:08.845 ************************************ 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:08.845 17:51:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85070 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85070 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85070 ']' 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.845 17:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.846 17:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.846 17:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.846 [2024-11-20 17:51:35.889376] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:17:08.846 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:08.846 Zero copy mechanism will not be used. 
00:17:08.846 [2024-11-20 17:51:35.889574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85070 ] 00:17:09.106 [2024-11-20 17:51:36.066539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.106 [2024-11-20 17:51:36.199629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.366 [2024-11-20 17:51:36.425271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.366 [2024-11-20 17:51:36.425380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.627 BaseBdev1_malloc 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.627 [2024-11-20 17:51:36.750260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:09.627 [2024-11-20 17:51:36.750368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.627 [2024-11-20 17:51:36.750409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:09.627 [2024-11-20 17:51:36.750441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.627 [2024-11-20 17:51:36.752809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.627 [2024-11-20 17:51:36.752882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:09.627 BaseBdev1 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.627 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 BaseBdev2_malloc 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 [2024-11-20 17:51:36.809778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:09.888 [2024-11-20 17:51:36.809840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.888 [2024-11-20 17:51:36.809865] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:09.888 [2024-11-20 17:51:36.809876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.888 [2024-11-20 17:51:36.812299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.888 [2024-11-20 17:51:36.812334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:09.888 BaseBdev2 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 BaseBdev3_malloc 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 [2024-11-20 17:51:36.901040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:09.888 [2024-11-20 17:51:36.901092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.888 [2024-11-20 17:51:36.901116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:09.888 [2024-11-20 17:51:36.901129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.888 
[2024-11-20 17:51:36.903494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.888 [2024-11-20 17:51:36.903535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:09.888 BaseBdev3 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 BaseBdev4_malloc 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 [2024-11-20 17:51:36.959268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:09.888 [2024-11-20 17:51:36.959323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.888 [2024-11-20 17:51:36.959344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:09.888 [2024-11-20 17:51:36.959355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.888 [2024-11-20 17:51:36.961663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.888 [2024-11-20 17:51:36.961702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:17:09.888 BaseBdev4 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 spare_malloc 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 spare_delay 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 [2024-11-20 17:51:37.031913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:09.888 [2024-11-20 17:51:37.031962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.888 [2024-11-20 17:51:37.031979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:09.888 [2024-11-20 17:51:37.031990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.888 [2024-11-20 17:51:37.034366] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.888 [2024-11-20 17:51:37.034401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:09.888 spare 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.888 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 [2024-11-20 17:51:37.043944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.888 [2024-11-20 17:51:37.046069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.889 [2024-11-20 17:51:37.046129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.889 [2024-11-20 17:51:37.046180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:09.889 [2024-11-20 17:51:37.046266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:09.889 [2024-11-20 17:51:37.046278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:09.889 [2024-11-20 17:51:37.046541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:09.889 [2024-11-20 17:51:37.053671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:09.889 [2024-11-20 17:51:37.053728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:09.889 [2024-11-20 17:51:37.053970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.889 17:51:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.889 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.149 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.149 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.149 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.149 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.149 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.149 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.149 "name": "raid_bdev1", 00:17:10.149 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:10.149 "strip_size_kb": 64, 00:17:10.149 "state": "online", 00:17:10.149 
"raid_level": "raid5f", 00:17:10.149 "superblock": false, 00:17:10.149 "num_base_bdevs": 4, 00:17:10.149 "num_base_bdevs_discovered": 4, 00:17:10.149 "num_base_bdevs_operational": 4, 00:17:10.149 "base_bdevs_list": [ 00:17:10.149 { 00:17:10.149 "name": "BaseBdev1", 00:17:10.149 "uuid": "d18d5cc5-6304-59da-96ab-192d161acfd2", 00:17:10.149 "is_configured": true, 00:17:10.149 "data_offset": 0, 00:17:10.149 "data_size": 65536 00:17:10.149 }, 00:17:10.149 { 00:17:10.149 "name": "BaseBdev2", 00:17:10.149 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:10.149 "is_configured": true, 00:17:10.149 "data_offset": 0, 00:17:10.149 "data_size": 65536 00:17:10.149 }, 00:17:10.149 { 00:17:10.149 "name": "BaseBdev3", 00:17:10.149 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:10.149 "is_configured": true, 00:17:10.149 "data_offset": 0, 00:17:10.149 "data_size": 65536 00:17:10.149 }, 00:17:10.149 { 00:17:10.149 "name": "BaseBdev4", 00:17:10.149 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:10.149 "is_configured": true, 00:17:10.149 "data_offset": 0, 00:17:10.149 "data_size": 65536 00:17:10.149 } 00:17:10.149 ] 00:17:10.149 }' 00:17:10.149 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.149 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.425 [2024-11-20 17:51:37.515094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:10.425 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:17:10.426 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:10.690 [2024-11-20 17:51:37.774536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:10.690 /dev/nbd0 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.690 1+0 records in 00:17:10.690 1+0 records out 00:17:10.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325338 s, 12.6 MB/s 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:10.690 17:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:11.260 512+0 records in 00:17:11.260 512+0 records out 00:17:11.260 100663296 bytes (101 MB, 96 MiB) copied, 0.553844 s, 182 MB/s 00:17:11.260 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:11.260 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.260 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:11.260 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.260 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:11.260 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.260 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:11.520 
[2024-11-20 17:51:38.629796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.520 [2024-11-20 17:51:38.651718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.520 17:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.780 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.780 "name": "raid_bdev1", 00:17:11.780 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:11.780 "strip_size_kb": 64, 00:17:11.780 "state": "online", 00:17:11.780 "raid_level": "raid5f", 00:17:11.780 "superblock": false, 00:17:11.780 "num_base_bdevs": 4, 00:17:11.780 "num_base_bdevs_discovered": 3, 00:17:11.780 "num_base_bdevs_operational": 3, 00:17:11.780 "base_bdevs_list": [ 00:17:11.780 { 00:17:11.780 "name": null, 00:17:11.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.780 "is_configured": false, 00:17:11.780 "data_offset": 0, 00:17:11.780 "data_size": 65536 00:17:11.780 }, 00:17:11.780 { 00:17:11.780 "name": "BaseBdev2", 00:17:11.780 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:11.780 "is_configured": true, 00:17:11.780 "data_offset": 0, 00:17:11.780 "data_size": 65536 00:17:11.780 }, 00:17:11.780 { 00:17:11.780 "name": "BaseBdev3", 00:17:11.780 "uuid": 
"529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:11.780 "is_configured": true, 00:17:11.780 "data_offset": 0, 00:17:11.780 "data_size": 65536 00:17:11.780 }, 00:17:11.780 { 00:17:11.780 "name": "BaseBdev4", 00:17:11.780 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:11.780 "is_configured": true, 00:17:11.780 "data_offset": 0, 00:17:11.780 "data_size": 65536 00:17:11.780 } 00:17:11.780 ] 00:17:11.780 }' 00:17:11.780 17:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.780 17:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.039 17:51:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:12.039 17:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.039 17:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.039 [2024-11-20 17:51:39.094901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.039 [2024-11-20 17:51:39.110285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:12.039 17:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.039 17:51:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:12.039 [2024-11-20 17:51:39.119718] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.978 17:51:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.978 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.238 "name": "raid_bdev1", 00:17:13.238 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:13.238 "strip_size_kb": 64, 00:17:13.238 "state": "online", 00:17:13.238 "raid_level": "raid5f", 00:17:13.238 "superblock": false, 00:17:13.238 "num_base_bdevs": 4, 00:17:13.238 "num_base_bdevs_discovered": 4, 00:17:13.238 "num_base_bdevs_operational": 4, 00:17:13.238 "process": { 00:17:13.238 "type": "rebuild", 00:17:13.238 "target": "spare", 00:17:13.238 "progress": { 00:17:13.238 "blocks": 19200, 00:17:13.238 "percent": 9 00:17:13.238 } 00:17:13.238 }, 00:17:13.238 "base_bdevs_list": [ 00:17:13.238 { 00:17:13.238 "name": "spare", 00:17:13.238 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:13.238 "is_configured": true, 00:17:13.238 "data_offset": 0, 00:17:13.238 "data_size": 65536 00:17:13.238 }, 00:17:13.238 { 00:17:13.238 "name": "BaseBdev2", 00:17:13.238 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:13.238 "is_configured": true, 00:17:13.238 "data_offset": 0, 00:17:13.238 "data_size": 65536 00:17:13.238 }, 00:17:13.238 { 00:17:13.238 "name": "BaseBdev3", 00:17:13.238 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:13.238 "is_configured": true, 00:17:13.238 "data_offset": 0, 00:17:13.238 "data_size": 65536 00:17:13.238 }, 
00:17:13.238 { 00:17:13.238 "name": "BaseBdev4", 00:17:13.238 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:13.238 "is_configured": true, 00:17:13.238 "data_offset": 0, 00:17:13.238 "data_size": 65536 00:17:13.238 } 00:17:13.238 ] 00:17:13.238 }' 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.238 [2024-11-20 17:51:40.270634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.238 [2024-11-20 17:51:40.326692] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:13.238 [2024-11-20 17:51:40.326754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.238 [2024-11-20 17:51:40.326771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.238 [2024-11-20 17:51:40.326786] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.238 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.498 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.498 "name": "raid_bdev1", 00:17:13.498 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:13.498 "strip_size_kb": 64, 00:17:13.498 "state": "online", 00:17:13.498 "raid_level": "raid5f", 00:17:13.498 "superblock": false, 00:17:13.498 "num_base_bdevs": 4, 00:17:13.498 "num_base_bdevs_discovered": 3, 00:17:13.498 "num_base_bdevs_operational": 3, 00:17:13.498 "base_bdevs_list": [ 00:17:13.498 { 00:17:13.498 "name": null, 00:17:13.498 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:13.498 "is_configured": false, 00:17:13.498 "data_offset": 0, 00:17:13.498 "data_size": 65536 00:17:13.498 }, 00:17:13.498 { 00:17:13.498 "name": "BaseBdev2", 00:17:13.498 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:13.498 "is_configured": true, 00:17:13.498 "data_offset": 0, 00:17:13.498 "data_size": 65536 00:17:13.498 }, 00:17:13.498 { 00:17:13.498 "name": "BaseBdev3", 00:17:13.498 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:13.498 "is_configured": true, 00:17:13.498 "data_offset": 0, 00:17:13.498 "data_size": 65536 00:17:13.498 }, 00:17:13.498 { 00:17:13.498 "name": "BaseBdev4", 00:17:13.498 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:13.498 "is_configured": true, 00:17:13.498 "data_offset": 0, 00:17:13.498 "data_size": 65536 00:17:13.498 } 00:17:13.498 ] 00:17:13.498 }' 00:17:13.498 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.498 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.759 17:51:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.759 "name": "raid_bdev1", 00:17:13.759 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:13.759 "strip_size_kb": 64, 00:17:13.759 "state": "online", 00:17:13.759 "raid_level": "raid5f", 00:17:13.759 "superblock": false, 00:17:13.759 "num_base_bdevs": 4, 00:17:13.759 "num_base_bdevs_discovered": 3, 00:17:13.759 "num_base_bdevs_operational": 3, 00:17:13.759 "base_bdevs_list": [ 00:17:13.759 { 00:17:13.759 "name": null, 00:17:13.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.759 "is_configured": false, 00:17:13.759 "data_offset": 0, 00:17:13.759 "data_size": 65536 00:17:13.759 }, 00:17:13.759 { 00:17:13.759 "name": "BaseBdev2", 00:17:13.759 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:13.759 "is_configured": true, 00:17:13.759 "data_offset": 0, 00:17:13.759 "data_size": 65536 00:17:13.759 }, 00:17:13.759 { 00:17:13.759 "name": "BaseBdev3", 00:17:13.759 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:13.759 "is_configured": true, 00:17:13.759 "data_offset": 0, 00:17:13.759 "data_size": 65536 00:17:13.759 }, 00:17:13.759 { 00:17:13.759 "name": "BaseBdev4", 00:17:13.759 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:13.759 "is_configured": true, 00:17:13.759 "data_offset": 0, 00:17:13.759 "data_size": 65536 00:17:13.759 } 00:17:13.759 ] 00:17:13.759 }' 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.759 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.019 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.019 17:51:40 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.019 17:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.019 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.019 17:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.019 [2024-11-20 17:51:40.986434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.019 [2024-11-20 17:51:41.001028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:14.019 17:51:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.019 17:51:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:14.019 [2024-11-20 17:51:41.009883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.958 17:51:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.958 "name": "raid_bdev1", 00:17:14.958 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:14.958 "strip_size_kb": 64, 00:17:14.958 "state": "online", 00:17:14.958 "raid_level": "raid5f", 00:17:14.958 "superblock": false, 00:17:14.958 "num_base_bdevs": 4, 00:17:14.958 "num_base_bdevs_discovered": 4, 00:17:14.958 "num_base_bdevs_operational": 4, 00:17:14.958 "process": { 00:17:14.958 "type": "rebuild", 00:17:14.958 "target": "spare", 00:17:14.958 "progress": { 00:17:14.958 "blocks": 19200, 00:17:14.958 "percent": 9 00:17:14.958 } 00:17:14.958 }, 00:17:14.958 "base_bdevs_list": [ 00:17:14.958 { 00:17:14.958 "name": "spare", 00:17:14.958 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:14.958 "is_configured": true, 00:17:14.958 "data_offset": 0, 00:17:14.958 "data_size": 65536 00:17:14.958 }, 00:17:14.958 { 00:17:14.958 "name": "BaseBdev2", 00:17:14.958 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:14.958 "is_configured": true, 00:17:14.958 "data_offset": 0, 00:17:14.958 "data_size": 65536 00:17:14.958 }, 00:17:14.958 { 00:17:14.958 "name": "BaseBdev3", 00:17:14.958 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:14.958 "is_configured": true, 00:17:14.958 "data_offset": 0, 00:17:14.958 "data_size": 65536 00:17:14.958 }, 00:17:14.958 { 00:17:14.958 "name": "BaseBdev4", 00:17:14.958 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:14.958 "is_configured": true, 00:17:14.958 "data_offset": 0, 00:17:14.958 "data_size": 65536 00:17:14.958 } 00:17:14.958 ] 00:17:14.958 }' 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.958 17:51:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=631 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.222 "name": "raid_bdev1", 00:17:15.222 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 
00:17:15.222 "strip_size_kb": 64, 00:17:15.222 "state": "online", 00:17:15.222 "raid_level": "raid5f", 00:17:15.222 "superblock": false, 00:17:15.222 "num_base_bdevs": 4, 00:17:15.222 "num_base_bdevs_discovered": 4, 00:17:15.222 "num_base_bdevs_operational": 4, 00:17:15.222 "process": { 00:17:15.222 "type": "rebuild", 00:17:15.222 "target": "spare", 00:17:15.222 "progress": { 00:17:15.222 "blocks": 21120, 00:17:15.222 "percent": 10 00:17:15.222 } 00:17:15.222 }, 00:17:15.222 "base_bdevs_list": [ 00:17:15.222 { 00:17:15.222 "name": "spare", 00:17:15.222 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:15.222 "is_configured": true, 00:17:15.222 "data_offset": 0, 00:17:15.222 "data_size": 65536 00:17:15.222 }, 00:17:15.222 { 00:17:15.222 "name": "BaseBdev2", 00:17:15.222 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:15.222 "is_configured": true, 00:17:15.222 "data_offset": 0, 00:17:15.222 "data_size": 65536 00:17:15.222 }, 00:17:15.222 { 00:17:15.222 "name": "BaseBdev3", 00:17:15.222 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:15.222 "is_configured": true, 00:17:15.222 "data_offset": 0, 00:17:15.222 "data_size": 65536 00:17:15.222 }, 00:17:15.222 { 00:17:15.222 "name": "BaseBdev4", 00:17:15.222 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:15.222 "is_configured": true, 00:17:15.222 "data_offset": 0, 00:17:15.222 "data_size": 65536 00:17:15.222 } 00:17:15.222 ] 00:17:15.222 }' 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.222 17:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.187 17:51:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.187 "name": "raid_bdev1", 00:17:16.187 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:16.187 "strip_size_kb": 64, 00:17:16.187 "state": "online", 00:17:16.187 "raid_level": "raid5f", 00:17:16.187 "superblock": false, 00:17:16.187 "num_base_bdevs": 4, 00:17:16.187 "num_base_bdevs_discovered": 4, 00:17:16.187 "num_base_bdevs_operational": 4, 00:17:16.187 "process": { 00:17:16.187 "type": "rebuild", 00:17:16.187 "target": "spare", 00:17:16.187 "progress": { 00:17:16.187 "blocks": 42240, 00:17:16.187 "percent": 21 00:17:16.187 } 00:17:16.187 }, 00:17:16.187 "base_bdevs_list": [ 00:17:16.187 { 00:17:16.187 "name": "spare", 00:17:16.187 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 
00:17:16.187 "is_configured": true, 00:17:16.187 "data_offset": 0, 00:17:16.187 "data_size": 65536 00:17:16.187 }, 00:17:16.187 { 00:17:16.187 "name": "BaseBdev2", 00:17:16.187 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:16.187 "is_configured": true, 00:17:16.187 "data_offset": 0, 00:17:16.187 "data_size": 65536 00:17:16.187 }, 00:17:16.187 { 00:17:16.187 "name": "BaseBdev3", 00:17:16.187 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:16.187 "is_configured": true, 00:17:16.187 "data_offset": 0, 00:17:16.187 "data_size": 65536 00:17:16.187 }, 00:17:16.187 { 00:17:16.187 "name": "BaseBdev4", 00:17:16.187 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:16.187 "is_configured": true, 00:17:16.187 "data_offset": 0, 00:17:16.187 "data_size": 65536 00:17:16.187 } 00:17:16.187 ] 00:17:16.187 }' 00:17:16.187 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.446 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.446 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.446 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.446 17:51:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.387 "name": "raid_bdev1", 00:17:17.387 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:17.387 "strip_size_kb": 64, 00:17:17.387 "state": "online", 00:17:17.387 "raid_level": "raid5f", 00:17:17.387 "superblock": false, 00:17:17.387 "num_base_bdevs": 4, 00:17:17.387 "num_base_bdevs_discovered": 4, 00:17:17.387 "num_base_bdevs_operational": 4, 00:17:17.387 "process": { 00:17:17.387 "type": "rebuild", 00:17:17.387 "target": "spare", 00:17:17.387 "progress": { 00:17:17.387 "blocks": 65280, 00:17:17.387 "percent": 33 00:17:17.387 } 00:17:17.387 }, 00:17:17.387 "base_bdevs_list": [ 00:17:17.387 { 00:17:17.387 "name": "spare", 00:17:17.387 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:17.387 "is_configured": true, 00:17:17.387 "data_offset": 0, 00:17:17.387 "data_size": 65536 00:17:17.387 }, 00:17:17.387 { 00:17:17.387 "name": "BaseBdev2", 00:17:17.387 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:17.387 "is_configured": true, 00:17:17.387 "data_offset": 0, 00:17:17.387 "data_size": 65536 00:17:17.387 }, 00:17:17.387 { 00:17:17.387 "name": "BaseBdev3", 00:17:17.387 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:17.387 "is_configured": true, 00:17:17.387 "data_offset": 0, 00:17:17.387 "data_size": 65536 00:17:17.387 }, 00:17:17.387 { 00:17:17.387 "name": 
"BaseBdev4", 00:17:17.387 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:17.387 "is_configured": true, 00:17:17.387 "data_offset": 0, 00:17:17.387 "data_size": 65536 00:17:17.387 } 00:17:17.387 ] 00:17:17.387 }' 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.387 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.648 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.648 17:51:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.587 17:51:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.587 17:51:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.587 "name": "raid_bdev1", 00:17:18.587 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:18.587 "strip_size_kb": 64, 00:17:18.587 "state": "online", 00:17:18.587 "raid_level": "raid5f", 00:17:18.587 "superblock": false, 00:17:18.587 "num_base_bdevs": 4, 00:17:18.587 "num_base_bdevs_discovered": 4, 00:17:18.587 "num_base_bdevs_operational": 4, 00:17:18.587 "process": { 00:17:18.587 "type": "rebuild", 00:17:18.587 "target": "spare", 00:17:18.587 "progress": { 00:17:18.587 "blocks": 86400, 00:17:18.587 "percent": 43 00:17:18.587 } 00:17:18.588 }, 00:17:18.588 "base_bdevs_list": [ 00:17:18.588 { 00:17:18.588 "name": "spare", 00:17:18.588 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:18.588 "is_configured": true, 00:17:18.588 "data_offset": 0, 00:17:18.588 "data_size": 65536 00:17:18.588 }, 00:17:18.588 { 00:17:18.588 "name": "BaseBdev2", 00:17:18.588 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:18.588 "is_configured": true, 00:17:18.588 "data_offset": 0, 00:17:18.588 "data_size": 65536 00:17:18.588 }, 00:17:18.588 { 00:17:18.588 "name": "BaseBdev3", 00:17:18.588 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:18.588 "is_configured": true, 00:17:18.588 "data_offset": 0, 00:17:18.588 "data_size": 65536 00:17:18.588 }, 00:17:18.588 { 00:17:18.588 "name": "BaseBdev4", 00:17:18.588 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:18.588 "is_configured": true, 00:17:18.588 "data_offset": 0, 00:17:18.588 "data_size": 65536 00:17:18.588 } 00:17:18.588 ] 00:17:18.588 }' 00:17:18.588 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.588 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.588 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.588 17:51:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.588 17:51:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.969 "name": "raid_bdev1", 00:17:19.969 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:19.969 "strip_size_kb": 64, 00:17:19.969 "state": "online", 00:17:19.969 "raid_level": "raid5f", 00:17:19.969 "superblock": false, 00:17:19.969 "num_base_bdevs": 4, 00:17:19.969 "num_base_bdevs_discovered": 4, 00:17:19.969 "num_base_bdevs_operational": 4, 00:17:19.969 "process": { 00:17:19.969 "type": "rebuild", 00:17:19.969 "target": "spare", 00:17:19.969 "progress": { 00:17:19.969 "blocks": 109440, 00:17:19.969 "percent": 55 00:17:19.969 } 
00:17:19.969 }, 00:17:19.969 "base_bdevs_list": [ 00:17:19.969 { 00:17:19.969 "name": "spare", 00:17:19.969 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:19.969 "is_configured": true, 00:17:19.969 "data_offset": 0, 00:17:19.969 "data_size": 65536 00:17:19.969 }, 00:17:19.969 { 00:17:19.969 "name": "BaseBdev2", 00:17:19.969 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:19.969 "is_configured": true, 00:17:19.969 "data_offset": 0, 00:17:19.969 "data_size": 65536 00:17:19.969 }, 00:17:19.969 { 00:17:19.969 "name": "BaseBdev3", 00:17:19.969 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:19.969 "is_configured": true, 00:17:19.969 "data_offset": 0, 00:17:19.969 "data_size": 65536 00:17:19.969 }, 00:17:19.969 { 00:17:19.969 "name": "BaseBdev4", 00:17:19.969 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:19.969 "is_configured": true, 00:17:19.969 "data_offset": 0, 00:17:19.969 "data_size": 65536 00:17:19.969 } 00:17:19.969 ] 00:17:19.969 }' 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.969 17:51:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.909 
17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.909 "name": "raid_bdev1", 00:17:20.909 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:20.909 "strip_size_kb": 64, 00:17:20.909 "state": "online", 00:17:20.909 "raid_level": "raid5f", 00:17:20.909 "superblock": false, 00:17:20.909 "num_base_bdevs": 4, 00:17:20.909 "num_base_bdevs_discovered": 4, 00:17:20.909 "num_base_bdevs_operational": 4, 00:17:20.909 "process": { 00:17:20.909 "type": "rebuild", 00:17:20.909 "target": "spare", 00:17:20.909 "progress": { 00:17:20.909 "blocks": 130560, 00:17:20.909 "percent": 66 00:17:20.909 } 00:17:20.909 }, 00:17:20.909 "base_bdevs_list": [ 00:17:20.909 { 00:17:20.909 "name": "spare", 00:17:20.909 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:20.909 "is_configured": true, 00:17:20.909 "data_offset": 0, 00:17:20.909 "data_size": 65536 00:17:20.909 }, 00:17:20.909 { 00:17:20.909 "name": "BaseBdev2", 00:17:20.909 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:20.909 "is_configured": true, 00:17:20.909 "data_offset": 0, 00:17:20.909 "data_size": 65536 00:17:20.909 }, 00:17:20.909 { 00:17:20.909 "name": "BaseBdev3", 00:17:20.909 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 
00:17:20.909 "is_configured": true, 00:17:20.909 "data_offset": 0, 00:17:20.909 "data_size": 65536 00:17:20.909 }, 00:17:20.909 { 00:17:20.909 "name": "BaseBdev4", 00:17:20.909 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:20.909 "is_configured": true, 00:17:20.909 "data_offset": 0, 00:17:20.909 "data_size": 65536 00:17:20.909 } 00:17:20.909 ] 00:17:20.909 }' 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.909 17:51:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.909 17:51:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.909 17:51:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.290 "name": "raid_bdev1", 00:17:22.290 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:22.290 "strip_size_kb": 64, 00:17:22.290 "state": "online", 00:17:22.290 "raid_level": "raid5f", 00:17:22.290 "superblock": false, 00:17:22.290 "num_base_bdevs": 4, 00:17:22.290 "num_base_bdevs_discovered": 4, 00:17:22.290 "num_base_bdevs_operational": 4, 00:17:22.290 "process": { 00:17:22.290 "type": "rebuild", 00:17:22.290 "target": "spare", 00:17:22.290 "progress": { 00:17:22.290 "blocks": 151680, 00:17:22.290 "percent": 77 00:17:22.290 } 00:17:22.290 }, 00:17:22.290 "base_bdevs_list": [ 00:17:22.290 { 00:17:22.290 "name": "spare", 00:17:22.290 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:22.290 "is_configured": true, 00:17:22.290 "data_offset": 0, 00:17:22.290 "data_size": 65536 00:17:22.290 }, 00:17:22.290 { 00:17:22.290 "name": "BaseBdev2", 00:17:22.290 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:22.290 "is_configured": true, 00:17:22.290 "data_offset": 0, 00:17:22.290 "data_size": 65536 00:17:22.290 }, 00:17:22.290 { 00:17:22.290 "name": "BaseBdev3", 00:17:22.290 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:22.290 "is_configured": true, 00:17:22.290 "data_offset": 0, 00:17:22.290 "data_size": 65536 00:17:22.290 }, 00:17:22.290 { 00:17:22.290 "name": "BaseBdev4", 00:17:22.290 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:22.290 "is_configured": true, 00:17:22.290 "data_offset": 0, 00:17:22.290 "data_size": 65536 00:17:22.290 } 00:17:22.290 ] 00:17:22.290 }' 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.290 17:51:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.230 "name": "raid_bdev1", 00:17:23.230 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:23.230 "strip_size_kb": 64, 00:17:23.230 "state": "online", 00:17:23.230 "raid_level": "raid5f", 00:17:23.230 "superblock": false, 00:17:23.230 "num_base_bdevs": 4, 00:17:23.230 "num_base_bdevs_discovered": 4, 00:17:23.230 "num_base_bdevs_operational": 4, 00:17:23.230 
"process": { 00:17:23.230 "type": "rebuild", 00:17:23.230 "target": "spare", 00:17:23.230 "progress": { 00:17:23.230 "blocks": 174720, 00:17:23.230 "percent": 88 00:17:23.230 } 00:17:23.230 }, 00:17:23.230 "base_bdevs_list": [ 00:17:23.230 { 00:17:23.230 "name": "spare", 00:17:23.230 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:23.230 "is_configured": true, 00:17:23.230 "data_offset": 0, 00:17:23.230 "data_size": 65536 00:17:23.230 }, 00:17:23.230 { 00:17:23.230 "name": "BaseBdev2", 00:17:23.230 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:23.230 "is_configured": true, 00:17:23.230 "data_offset": 0, 00:17:23.230 "data_size": 65536 00:17:23.230 }, 00:17:23.230 { 00:17:23.230 "name": "BaseBdev3", 00:17:23.230 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:23.230 "is_configured": true, 00:17:23.230 "data_offset": 0, 00:17:23.230 "data_size": 65536 00:17:23.230 }, 00:17:23.230 { 00:17:23.230 "name": "BaseBdev4", 00:17:23.230 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:23.230 "is_configured": true, 00:17:23.230 "data_offset": 0, 00:17:23.230 "data_size": 65536 00:17:23.230 } 00:17:23.230 ] 00:17:23.230 }' 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.230 17:51:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.170 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.170 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.170 17:51:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.170 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.170 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.170 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.429 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.429 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.429 17:51:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.429 17:51:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.429 [2024-11-20 17:51:51.361542] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:24.429 [2024-11-20 17:51:51.361674] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:24.429 [2024-11-20 17:51:51.361748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.429 17:51:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.429 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.429 "name": "raid_bdev1", 00:17:24.429 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:24.429 "strip_size_kb": 64, 00:17:24.429 "state": "online", 00:17:24.429 "raid_level": "raid5f", 00:17:24.429 "superblock": false, 00:17:24.429 "num_base_bdevs": 4, 00:17:24.429 "num_base_bdevs_discovered": 4, 00:17:24.429 "num_base_bdevs_operational": 4, 00:17:24.429 "process": { 00:17:24.429 "type": "rebuild", 00:17:24.429 "target": "spare", 00:17:24.429 "progress": { 00:17:24.429 "blocks": 195840, 00:17:24.429 "percent": 99 00:17:24.429 } 00:17:24.429 }, 00:17:24.429 "base_bdevs_list": [ 
00:17:24.429 { 00:17:24.429 "name": "spare", 00:17:24.429 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:24.429 "is_configured": true, 00:17:24.430 "data_offset": 0, 00:17:24.430 "data_size": 65536 00:17:24.430 }, 00:17:24.430 { 00:17:24.430 "name": "BaseBdev2", 00:17:24.430 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:24.430 "is_configured": true, 00:17:24.430 "data_offset": 0, 00:17:24.430 "data_size": 65536 00:17:24.430 }, 00:17:24.430 { 00:17:24.430 "name": "BaseBdev3", 00:17:24.430 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:24.430 "is_configured": true, 00:17:24.430 "data_offset": 0, 00:17:24.430 "data_size": 65536 00:17:24.430 }, 00:17:24.430 { 00:17:24.430 "name": "BaseBdev4", 00:17:24.430 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:24.430 "is_configured": true, 00:17:24.430 "data_offset": 0, 00:17:24.430 "data_size": 65536 00:17:24.430 } 00:17:24.430 ] 00:17:24.430 }' 00:17:24.430 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.430 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.430 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.430 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.430 17:51:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.368 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.628 "name": "raid_bdev1", 00:17:25.628 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:25.628 "strip_size_kb": 64, 00:17:25.628 "state": "online", 00:17:25.628 "raid_level": "raid5f", 00:17:25.628 "superblock": false, 00:17:25.628 "num_base_bdevs": 4, 00:17:25.628 "num_base_bdevs_discovered": 4, 00:17:25.628 "num_base_bdevs_operational": 4, 00:17:25.628 "base_bdevs_list": [ 00:17:25.628 { 00:17:25.628 "name": "spare", 00:17:25.628 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:25.628 "is_configured": true, 00:17:25.628 "data_offset": 0, 00:17:25.628 "data_size": 65536 00:17:25.628 }, 00:17:25.628 { 00:17:25.628 "name": "BaseBdev2", 00:17:25.628 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:25.628 "is_configured": true, 00:17:25.628 "data_offset": 0, 00:17:25.628 "data_size": 65536 00:17:25.628 }, 00:17:25.628 { 00:17:25.628 "name": "BaseBdev3", 00:17:25.628 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:25.628 "is_configured": true, 00:17:25.628 "data_offset": 0, 00:17:25.628 "data_size": 65536 00:17:25.628 }, 00:17:25.628 { 00:17:25.628 "name": "BaseBdev4", 00:17:25.628 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:25.628 "is_configured": 
true, 00:17:25.628 "data_offset": 0, 00:17:25.628 "data_size": 65536 00:17:25.628 } 00:17:25.628 ] 00:17:25.628 }' 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.628 "name": "raid_bdev1", 00:17:25.628 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:25.628 "strip_size_kb": 64, 00:17:25.628 "state": 
"online", 00:17:25.628 "raid_level": "raid5f", 00:17:25.628 "superblock": false, 00:17:25.628 "num_base_bdevs": 4, 00:17:25.628 "num_base_bdevs_discovered": 4, 00:17:25.628 "num_base_bdevs_operational": 4, 00:17:25.628 "base_bdevs_list": [ 00:17:25.628 { 00:17:25.628 "name": "spare", 00:17:25.628 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:25.628 "is_configured": true, 00:17:25.628 "data_offset": 0, 00:17:25.628 "data_size": 65536 00:17:25.628 }, 00:17:25.628 { 00:17:25.628 "name": "BaseBdev2", 00:17:25.628 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:25.628 "is_configured": true, 00:17:25.628 "data_offset": 0, 00:17:25.628 "data_size": 65536 00:17:25.628 }, 00:17:25.628 { 00:17:25.628 "name": "BaseBdev3", 00:17:25.628 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:25.628 "is_configured": true, 00:17:25.628 "data_offset": 0, 00:17:25.628 "data_size": 65536 00:17:25.628 }, 00:17:25.628 { 00:17:25.628 "name": "BaseBdev4", 00:17:25.628 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:25.628 "is_configured": true, 00:17:25.628 "data_offset": 0, 00:17:25.628 "data_size": 65536 00:17:25.628 } 00:17:25.628 ] 00:17:25.628 }' 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.628 17:51:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.628 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.888 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.888 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.888 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.888 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.888 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.888 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.888 "name": "raid_bdev1", 00:17:25.888 "uuid": "a03c5969-1559-4c0b-9eff-301df601aba1", 00:17:25.888 "strip_size_kb": 64, 00:17:25.888 "state": "online", 00:17:25.888 "raid_level": "raid5f", 00:17:25.888 "superblock": false, 00:17:25.888 "num_base_bdevs": 4, 00:17:25.888 "num_base_bdevs_discovered": 4, 00:17:25.888 "num_base_bdevs_operational": 4, 00:17:25.888 "base_bdevs_list": [ 00:17:25.888 { 00:17:25.888 "name": "spare", 00:17:25.888 "uuid": "42e670bb-9f6d-5dd2-8e05-025f2bd5bcc0", 00:17:25.888 "is_configured": true, 00:17:25.888 "data_offset": 0, 00:17:25.888 "data_size": 65536 00:17:25.888 }, 00:17:25.888 { 00:17:25.888 
"name": "BaseBdev2", 00:17:25.888 "uuid": "77651d06-2c24-5c4b-8941-bf67d4f47172", 00:17:25.888 "is_configured": true, 00:17:25.888 "data_offset": 0, 00:17:25.888 "data_size": 65536 00:17:25.888 }, 00:17:25.888 { 00:17:25.888 "name": "BaseBdev3", 00:17:25.888 "uuid": "529c633f-2c89-5323-9bbc-e3f5ce010a11", 00:17:25.888 "is_configured": true, 00:17:25.888 "data_offset": 0, 00:17:25.888 "data_size": 65536 00:17:25.888 }, 00:17:25.888 { 00:17:25.888 "name": "BaseBdev4", 00:17:25.888 "uuid": "ac2b2e17-a957-56f3-8a70-58c624a6b424", 00:17:25.888 "is_configured": true, 00:17:25.888 "data_offset": 0, 00:17:25.888 "data_size": 65536 00:17:25.888 } 00:17:25.888 ] 00:17:25.888 }' 00:17:25.888 17:51:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.888 17:51:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.148 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.149 [2024-11-20 17:51:53.237075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.149 [2024-11-20 17:51:53.237155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.149 [2024-11-20 17:51:53.237291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.149 [2024-11-20 17:51:53.237423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.149 [2024-11-20 17:51:53.237471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.149 17:51:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.149 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:26.408 /dev/nbd0 00:17:26.408 17:51:53 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.408 1+0 records in 00:17:26.408 1+0 records out 00:17:26.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291287 s, 14.1 MB/s 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.408 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:26.668 /dev/nbd1 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.668 1+0 records in 00:17:26.668 1+0 records out 00:17:26.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238671 s, 17.2 MB/s 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.668 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:26.927 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:26.927 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.927 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:26.927 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:26.927 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:26.927 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.927 17:51:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.186 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85070 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85070 ']' 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85070 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85070 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85070' 00:17:27.446 killing process with pid 85070 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85070 00:17:27.446 Received shutdown signal, test time was about 60.000000 seconds 00:17:27.446 00:17:27.446 Latency(us) 00:17:27.446 [2024-11-20T17:51:54.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.446 [2024-11-20T17:51:54.622Z] =================================================================================================================== 00:17:27.446 [2024-11-20T17:51:54.622Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:27.446 [2024-11-20 17:51:54.428899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.446 17:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85070 00:17:28.014 [2024-11-20 17:51:54.931115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.949 17:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:28.949 00:17:28.949 real 0m20.290s 00:17:28.949 user 0m24.134s 00:17:28.949 sys 0m2.440s 00:17:28.949 17:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.949 17:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.949 ************************************ 00:17:28.949 END TEST raid5f_rebuild_test 00:17:28.949 ************************************ 00:17:29.208 17:51:56 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:17:29.208 17:51:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:29.208 17:51:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.208 17:51:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.208 ************************************ 00:17:29.208 START TEST raid5f_rebuild_test_sb 00:17:29.208 ************************************ 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:29.208 17:51:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85586 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85586 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85586 ']' 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.208 17:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.208 [2024-11-20 17:51:56.256818] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:17:29.208 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:29.208 Zero copy mechanism will not be used. 
00:17:29.208 [2024-11-20 17:51:56.257310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85586 ] 00:17:29.467 [2024-11-20 17:51:56.449003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.467 [2024-11-20 17:51:56.577034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.757 [2024-11-20 17:51:56.795766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.757 [2024-11-20 17:51:56.795836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.016 BaseBdev1_malloc 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.016 [2024-11-20 17:51:57.126261] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:30.016 [2024-11-20 17:51:57.126323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.016 [2024-11-20 17:51:57.126346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:30.016 [2024-11-20 17:51:57.126358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.016 [2024-11-20 17:51:57.128685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.016 [2024-11-20 17:51:57.128721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.016 BaseBdev1 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.016 BaseBdev2_malloc 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.016 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.016 [2024-11-20 17:51:57.186273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:30.016 [2024-11-20 17:51:57.186330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:30.016 [2024-11-20 17:51:57.186354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:30.016 [2024-11-20 17:51:57.186366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.016 [2024-11-20 17:51:57.188685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.016 [2024-11-20 17:51:57.188717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:30.016 BaseBdev2 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 BaseBdev3_malloc 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 [2024-11-20 17:51:57.278816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:30.276 [2024-11-20 17:51:57.278863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.276 [2024-11-20 17:51:57.278885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:30.276 [2024-11-20 
17:51:57.278897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.276 [2024-11-20 17:51:57.281204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.276 [2024-11-20 17:51:57.281237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:30.276 BaseBdev3 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 BaseBdev4_malloc 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 [2024-11-20 17:51:57.339334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:30.276 [2024-11-20 17:51:57.339385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.276 [2024-11-20 17:51:57.339406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:30.276 [2024-11-20 17:51:57.339417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.276 [2024-11-20 17:51:57.341723] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:30.276 [2024-11-20 17:51:57.341757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:30.276 BaseBdev4 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 spare_malloc 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 spare_delay 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 [2024-11-20 17:51:57.409990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:30.276 [2024-11-20 17:51:57.410050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.276 [2024-11-20 17:51:57.410068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:30.276 [2024-11-20 17:51:57.410079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.276 [2024-11-20 17:51:57.412348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.276 [2024-11-20 17:51:57.412380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:30.276 spare 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:30.276 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.277 [2024-11-20 17:51:57.422043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.277 [2024-11-20 17:51:57.424084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.277 [2024-11-20 17:51:57.424146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.277 [2024-11-20 17:51:57.424194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:30.277 [2024-11-20 17:51:57.424379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:30.277 [2024-11-20 17:51:57.424400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:30.277 [2024-11-20 17:51:57.424653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:30.277 [2024-11-20 17:51:57.431710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:30.277 [2024-11-20 17:51:57.431734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:30.277 [2024-11-20 17:51:57.431909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.277 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.534 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.534 17:51:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.534 "name": "raid_bdev1", 00:17:30.534 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:30.534 "strip_size_kb": 64, 00:17:30.534 "state": "online", 00:17:30.534 "raid_level": "raid5f", 00:17:30.534 "superblock": true, 00:17:30.534 "num_base_bdevs": 4, 00:17:30.535 "num_base_bdevs_discovered": 4, 00:17:30.535 "num_base_bdevs_operational": 4, 00:17:30.535 "base_bdevs_list": [ 00:17:30.535 { 00:17:30.535 "name": "BaseBdev1", 00:17:30.535 "uuid": "0d873ef0-e0a6-5ed9-a5c3-e52c0c9e950a", 00:17:30.535 "is_configured": true, 00:17:30.535 "data_offset": 2048, 00:17:30.535 "data_size": 63488 00:17:30.535 }, 00:17:30.535 { 00:17:30.535 "name": "BaseBdev2", 00:17:30.535 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:30.535 "is_configured": true, 00:17:30.535 "data_offset": 2048, 00:17:30.535 "data_size": 63488 00:17:30.535 }, 00:17:30.535 { 00:17:30.535 "name": "BaseBdev3", 00:17:30.535 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:30.535 "is_configured": true, 00:17:30.535 "data_offset": 2048, 00:17:30.535 "data_size": 63488 00:17:30.535 }, 00:17:30.535 { 00:17:30.535 "name": "BaseBdev4", 00:17:30.535 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:30.535 "is_configured": true, 00:17:30.535 "data_offset": 2048, 00:17:30.535 "data_size": 63488 00:17:30.535 } 00:17:30.535 ] 00:17:30.535 }' 00:17:30.535 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.535 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:30.793 [2024-11-20 17:51:57.884777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:30.793 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.051 17:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:31.051 [2024-11-20 17:51:58.148164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:31.051 /dev/nbd0 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:31.051 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.051 1+0 records in 00:17:31.051 1+0 records out 00:17:31.051 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000420971 s, 9.7 MB/s 00:17:31.052 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:31.311 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:31.571 496+0 records in 00:17:31.571 496+0 records out 00:17:31.571 97517568 bytes (98 MB, 93 MiB) copied, 0.492601 s, 198 MB/s 00:17:31.571 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:31.571 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.571 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:31.571 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.571 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:17:31.571 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.571 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.830 [2024-11-20 17:51:58.941306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.830 [2024-11-20 17:51:58.959113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.830 17:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.089 17:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.089 "name": "raid_bdev1", 00:17:32.089 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:32.089 "strip_size_kb": 64, 00:17:32.089 "state": "online", 00:17:32.089 "raid_level": "raid5f", 00:17:32.089 "superblock": true, 00:17:32.089 "num_base_bdevs": 4, 00:17:32.089 "num_base_bdevs_discovered": 3, 00:17:32.089 "num_base_bdevs_operational": 3, 00:17:32.089 "base_bdevs_list": [ 00:17:32.089 { 00:17:32.089 "name": null, 
00:17:32.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.089 "is_configured": false, 00:17:32.089 "data_offset": 0, 00:17:32.090 "data_size": 63488 00:17:32.090 }, 00:17:32.090 { 00:17:32.090 "name": "BaseBdev2", 00:17:32.090 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:32.090 "is_configured": true, 00:17:32.090 "data_offset": 2048, 00:17:32.090 "data_size": 63488 00:17:32.090 }, 00:17:32.090 { 00:17:32.090 "name": "BaseBdev3", 00:17:32.090 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:32.090 "is_configured": true, 00:17:32.090 "data_offset": 2048, 00:17:32.090 "data_size": 63488 00:17:32.090 }, 00:17:32.090 { 00:17:32.090 "name": "BaseBdev4", 00:17:32.090 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:32.090 "is_configured": true, 00:17:32.090 "data_offset": 2048, 00:17:32.090 "data_size": 63488 00:17:32.090 } 00:17:32.090 ] 00:17:32.090 }' 00:17:32.090 17:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.090 17:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.472 17:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:32.472 17:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.472 17:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.472 [2024-11-20 17:51:59.422284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.472 [2024-11-20 17:51:59.437299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:32.472 17:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.472 17:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:32.472 [2024-11-20 17:51:59.446360] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.411 "name": "raid_bdev1", 00:17:33.411 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:33.411 "strip_size_kb": 64, 00:17:33.411 "state": "online", 00:17:33.411 "raid_level": "raid5f", 00:17:33.411 "superblock": true, 00:17:33.411 "num_base_bdevs": 4, 00:17:33.411 "num_base_bdevs_discovered": 4, 00:17:33.411 "num_base_bdevs_operational": 4, 00:17:33.411 "process": { 00:17:33.411 "type": "rebuild", 00:17:33.411 "target": "spare", 00:17:33.411 "progress": { 00:17:33.411 "blocks": 19200, 00:17:33.411 "percent": 10 00:17:33.411 } 00:17:33.411 }, 00:17:33.411 "base_bdevs_list": [ 00:17:33.411 { 00:17:33.411 "name": "spare", 00:17:33.411 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:33.411 "is_configured": true, 
00:17:33.411 "data_offset": 2048, 00:17:33.411 "data_size": 63488 00:17:33.411 }, 00:17:33.411 { 00:17:33.411 "name": "BaseBdev2", 00:17:33.411 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:33.411 "is_configured": true, 00:17:33.411 "data_offset": 2048, 00:17:33.411 "data_size": 63488 00:17:33.411 }, 00:17:33.411 { 00:17:33.411 "name": "BaseBdev3", 00:17:33.411 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:33.411 "is_configured": true, 00:17:33.411 "data_offset": 2048, 00:17:33.411 "data_size": 63488 00:17:33.411 }, 00:17:33.411 { 00:17:33.411 "name": "BaseBdev4", 00:17:33.411 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:33.411 "is_configured": true, 00:17:33.411 "data_offset": 2048, 00:17:33.411 "data_size": 63488 00:17:33.411 } 00:17:33.411 ] 00:17:33.411 }' 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.411 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.672 [2024-11-20 17:52:00.601218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.672 [2024-11-20 17:52:00.653128] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:33.672 [2024-11-20 17:52:00.653190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.672 [2024-11-20 
17:52:00.653206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.672 [2024-11-20 17:52:00.653216] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.672 "name": "raid_bdev1", 00:17:33.672 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:33.672 "strip_size_kb": 64, 00:17:33.672 "state": "online", 00:17:33.672 "raid_level": "raid5f", 00:17:33.672 "superblock": true, 00:17:33.672 "num_base_bdevs": 4, 00:17:33.672 "num_base_bdevs_discovered": 3, 00:17:33.672 "num_base_bdevs_operational": 3, 00:17:33.672 "base_bdevs_list": [ 00:17:33.672 { 00:17:33.672 "name": null, 00:17:33.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.672 "is_configured": false, 00:17:33.672 "data_offset": 0, 00:17:33.672 "data_size": 63488 00:17:33.672 }, 00:17:33.672 { 00:17:33.672 "name": "BaseBdev2", 00:17:33.672 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:33.672 "is_configured": true, 00:17:33.672 "data_offset": 2048, 00:17:33.672 "data_size": 63488 00:17:33.672 }, 00:17:33.672 { 00:17:33.672 "name": "BaseBdev3", 00:17:33.672 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:33.672 "is_configured": true, 00:17:33.672 "data_offset": 2048, 00:17:33.672 "data_size": 63488 00:17:33.672 }, 00:17:33.672 { 00:17:33.672 "name": "BaseBdev4", 00:17:33.672 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:33.672 "is_configured": true, 00:17:33.672 "data_offset": 2048, 00:17:33.672 "data_size": 63488 00:17:33.672 } 00:17:33.672 ] 00:17:33.672 }' 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.672 17:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.932 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.932 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.932 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.932 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.192 "name": "raid_bdev1", 00:17:34.192 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:34.192 "strip_size_kb": 64, 00:17:34.192 "state": "online", 00:17:34.192 "raid_level": "raid5f", 00:17:34.192 "superblock": true, 00:17:34.192 "num_base_bdevs": 4, 00:17:34.192 "num_base_bdevs_discovered": 3, 00:17:34.192 "num_base_bdevs_operational": 3, 00:17:34.192 "base_bdevs_list": [ 00:17:34.192 { 00:17:34.192 "name": null, 00:17:34.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.192 "is_configured": false, 00:17:34.192 "data_offset": 0, 00:17:34.192 "data_size": 63488 00:17:34.192 }, 00:17:34.192 { 00:17:34.192 "name": "BaseBdev2", 00:17:34.192 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:34.192 "is_configured": true, 00:17:34.192 "data_offset": 2048, 00:17:34.192 "data_size": 63488 00:17:34.192 }, 00:17:34.192 { 00:17:34.192 "name": "BaseBdev3", 00:17:34.192 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:34.192 "is_configured": true, 00:17:34.192 "data_offset": 2048, 00:17:34.192 "data_size": 63488 00:17:34.192 }, 
00:17:34.192 { 00:17:34.192 "name": "BaseBdev4", 00:17:34.192 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:34.192 "is_configured": true, 00:17:34.192 "data_offset": 2048, 00:17:34.192 "data_size": 63488 00:17:34.192 } 00:17:34.192 ] 00:17:34.192 }' 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.192 [2024-11-20 17:52:01.244647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.192 [2024-11-20 17:52:01.259029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.192 17:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:34.192 [2024-11-20 17:52:01.267693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.130 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.390 "name": "raid_bdev1", 00:17:35.390 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:35.390 "strip_size_kb": 64, 00:17:35.390 "state": "online", 00:17:35.390 "raid_level": "raid5f", 00:17:35.390 "superblock": true, 00:17:35.390 "num_base_bdevs": 4, 00:17:35.390 "num_base_bdevs_discovered": 4, 00:17:35.390 "num_base_bdevs_operational": 4, 00:17:35.390 "process": { 00:17:35.390 "type": "rebuild", 00:17:35.390 "target": "spare", 00:17:35.390 "progress": { 00:17:35.390 "blocks": 19200, 00:17:35.390 "percent": 10 00:17:35.390 } 00:17:35.390 }, 00:17:35.390 "base_bdevs_list": [ 00:17:35.390 { 00:17:35.390 "name": "spare", 00:17:35.390 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:35.390 "is_configured": true, 00:17:35.390 "data_offset": 2048, 00:17:35.390 "data_size": 63488 00:17:35.390 }, 00:17:35.390 { 00:17:35.390 "name": "BaseBdev2", 00:17:35.390 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:35.390 "is_configured": true, 00:17:35.390 "data_offset": 2048, 00:17:35.390 "data_size": 63488 00:17:35.390 }, 00:17:35.390 { 00:17:35.390 "name": "BaseBdev3", 00:17:35.390 "uuid": 
"d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:35.390 "is_configured": true, 00:17:35.390 "data_offset": 2048, 00:17:35.390 "data_size": 63488 00:17:35.390 }, 00:17:35.390 { 00:17:35.390 "name": "BaseBdev4", 00:17:35.390 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:35.390 "is_configured": true, 00:17:35.390 "data_offset": 2048, 00:17:35.390 "data_size": 63488 00:17:35.390 } 00:17:35.390 ] 00:17:35.390 }' 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:35.390 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=651 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.390 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.390 "name": "raid_bdev1", 00:17:35.390 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:35.390 "strip_size_kb": 64, 00:17:35.390 "state": "online", 00:17:35.390 "raid_level": "raid5f", 00:17:35.390 "superblock": true, 00:17:35.390 "num_base_bdevs": 4, 00:17:35.390 "num_base_bdevs_discovered": 4, 00:17:35.390 "num_base_bdevs_operational": 4, 00:17:35.390 "process": { 00:17:35.390 "type": "rebuild", 00:17:35.390 "target": "spare", 00:17:35.390 "progress": { 00:17:35.390 "blocks": 21120, 00:17:35.390 "percent": 11 00:17:35.390 } 00:17:35.390 }, 00:17:35.390 "base_bdevs_list": [ 00:17:35.390 { 00:17:35.390 "name": "spare", 00:17:35.390 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:35.390 "is_configured": true, 00:17:35.390 "data_offset": 2048, 00:17:35.390 "data_size": 63488 00:17:35.390 }, 00:17:35.390 { 00:17:35.391 "name": "BaseBdev2", 00:17:35.391 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:35.391 "is_configured": true, 00:17:35.391 "data_offset": 2048, 00:17:35.391 "data_size": 63488 00:17:35.391 }, 00:17:35.391 { 00:17:35.391 "name": "BaseBdev3", 00:17:35.391 "uuid": 
"d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:35.391 "is_configured": true, 00:17:35.391 "data_offset": 2048, 00:17:35.391 "data_size": 63488 00:17:35.391 }, 00:17:35.391 { 00:17:35.391 "name": "BaseBdev4", 00:17:35.391 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:35.391 "is_configured": true, 00:17:35.391 "data_offset": 2048, 00:17:35.391 "data_size": 63488 00:17:35.391 } 00:17:35.391 ] 00:17:35.391 }' 00:17:35.391 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.391 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.391 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.391 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.391 17:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.772 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.772 "name": "raid_bdev1", 00:17:36.772 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:36.772 "strip_size_kb": 64, 00:17:36.772 "state": "online", 00:17:36.772 "raid_level": "raid5f", 00:17:36.772 "superblock": true, 00:17:36.772 "num_base_bdevs": 4, 00:17:36.772 "num_base_bdevs_discovered": 4, 00:17:36.772 "num_base_bdevs_operational": 4, 00:17:36.772 "process": { 00:17:36.772 "type": "rebuild", 00:17:36.772 "target": "spare", 00:17:36.772 "progress": { 00:17:36.772 "blocks": 42240, 00:17:36.772 "percent": 22 00:17:36.772 } 00:17:36.772 }, 00:17:36.772 "base_bdevs_list": [ 00:17:36.772 { 00:17:36.772 "name": "spare", 00:17:36.772 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:36.772 "is_configured": true, 00:17:36.772 "data_offset": 2048, 00:17:36.772 "data_size": 63488 00:17:36.772 }, 00:17:36.772 { 00:17:36.772 "name": "BaseBdev2", 00:17:36.772 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:36.772 "is_configured": true, 00:17:36.772 "data_offset": 2048, 00:17:36.772 "data_size": 63488 00:17:36.772 }, 00:17:36.772 { 00:17:36.772 "name": "BaseBdev3", 00:17:36.772 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:36.772 "is_configured": true, 00:17:36.772 "data_offset": 2048, 00:17:36.772 "data_size": 63488 00:17:36.772 }, 00:17:36.772 { 00:17:36.772 "name": "BaseBdev4", 00:17:36.772 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:36.772 "is_configured": true, 00:17:36.772 "data_offset": 2048, 00:17:36.772 "data_size": 63488 00:17:36.773 } 00:17:36.773 ] 00:17:36.773 }' 00:17:36.773 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.773 17:52:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.773 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.773 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.773 17:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.713 "name": "raid_bdev1", 00:17:37.713 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:37.713 "strip_size_kb": 64, 00:17:37.713 "state": "online", 00:17:37.713 "raid_level": "raid5f", 00:17:37.713 "superblock": true, 
00:17:37.713 "num_base_bdevs": 4, 00:17:37.713 "num_base_bdevs_discovered": 4, 00:17:37.713 "num_base_bdevs_operational": 4, 00:17:37.713 "process": { 00:17:37.713 "type": "rebuild", 00:17:37.713 "target": "spare", 00:17:37.713 "progress": { 00:17:37.713 "blocks": 65280, 00:17:37.713 "percent": 34 00:17:37.713 } 00:17:37.713 }, 00:17:37.713 "base_bdevs_list": [ 00:17:37.713 { 00:17:37.713 "name": "spare", 00:17:37.713 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:37.713 "is_configured": true, 00:17:37.713 "data_offset": 2048, 00:17:37.713 "data_size": 63488 00:17:37.713 }, 00:17:37.713 { 00:17:37.713 "name": "BaseBdev2", 00:17:37.713 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:37.713 "is_configured": true, 00:17:37.713 "data_offset": 2048, 00:17:37.713 "data_size": 63488 00:17:37.713 }, 00:17:37.713 { 00:17:37.713 "name": "BaseBdev3", 00:17:37.713 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:37.713 "is_configured": true, 00:17:37.713 "data_offset": 2048, 00:17:37.713 "data_size": 63488 00:17:37.713 }, 00:17:37.713 { 00:17:37.713 "name": "BaseBdev4", 00:17:37.713 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:37.713 "is_configured": true, 00:17:37.713 "data_offset": 2048, 00:17:37.713 "data_size": 63488 00:17:37.713 } 00:17:37.713 ] 00:17:37.713 }' 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.713 17:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.095 17:52:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.095 "name": "raid_bdev1", 00:17:39.095 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:39.095 "strip_size_kb": 64, 00:17:39.095 "state": "online", 00:17:39.095 "raid_level": "raid5f", 00:17:39.095 "superblock": true, 00:17:39.095 "num_base_bdevs": 4, 00:17:39.095 "num_base_bdevs_discovered": 4, 00:17:39.095 "num_base_bdevs_operational": 4, 00:17:39.095 "process": { 00:17:39.095 "type": "rebuild", 00:17:39.095 "target": "spare", 00:17:39.095 "progress": { 00:17:39.095 "blocks": 86400, 00:17:39.095 "percent": 45 00:17:39.095 } 00:17:39.095 }, 00:17:39.095 "base_bdevs_list": [ 00:17:39.095 { 00:17:39.095 "name": "spare", 00:17:39.095 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:39.095 "is_configured": true, 00:17:39.095 "data_offset": 2048, 00:17:39.095 
"data_size": 63488 00:17:39.095 }, 00:17:39.095 { 00:17:39.095 "name": "BaseBdev2", 00:17:39.095 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:39.095 "is_configured": true, 00:17:39.095 "data_offset": 2048, 00:17:39.095 "data_size": 63488 00:17:39.095 }, 00:17:39.095 { 00:17:39.095 "name": "BaseBdev3", 00:17:39.095 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:39.095 "is_configured": true, 00:17:39.095 "data_offset": 2048, 00:17:39.095 "data_size": 63488 00:17:39.095 }, 00:17:39.095 { 00:17:39.095 "name": "BaseBdev4", 00:17:39.095 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:39.095 "is_configured": true, 00:17:39.095 "data_offset": 2048, 00:17:39.095 "data_size": 63488 00:17:39.095 } 00:17:39.095 ] 00:17:39.095 }' 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.095 17:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.037 17:52:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.037 17:52:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.037 "name": "raid_bdev1", 00:17:40.037 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:40.037 "strip_size_kb": 64, 00:17:40.037 "state": "online", 00:17:40.037 "raid_level": "raid5f", 00:17:40.037 "superblock": true, 00:17:40.037 "num_base_bdevs": 4, 00:17:40.037 "num_base_bdevs_discovered": 4, 00:17:40.037 "num_base_bdevs_operational": 4, 00:17:40.037 "process": { 00:17:40.037 "type": "rebuild", 00:17:40.037 "target": "spare", 00:17:40.037 "progress": { 00:17:40.037 "blocks": 109440, 00:17:40.037 "percent": 57 00:17:40.037 } 00:17:40.037 }, 00:17:40.037 "base_bdevs_list": [ 00:17:40.037 { 00:17:40.037 "name": "spare", 00:17:40.037 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:40.037 "is_configured": true, 00:17:40.037 "data_offset": 2048, 00:17:40.037 "data_size": 63488 00:17:40.037 }, 00:17:40.037 { 00:17:40.037 "name": "BaseBdev2", 00:17:40.037 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:40.037 "is_configured": true, 00:17:40.037 "data_offset": 2048, 00:17:40.037 "data_size": 63488 00:17:40.037 }, 00:17:40.037 { 00:17:40.037 "name": "BaseBdev3", 00:17:40.037 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:40.037 "is_configured": true, 00:17:40.037 "data_offset": 2048, 00:17:40.037 "data_size": 63488 00:17:40.037 }, 00:17:40.037 { 00:17:40.037 "name": "BaseBdev4", 
00:17:40.037 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:40.037 "is_configured": true, 00:17:40.037 "data_offset": 2048, 00:17:40.037 "data_size": 63488 00:17:40.037 } 00:17:40.037 ] 00:17:40.037 }' 00:17:40.037 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.038 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.038 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.038 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.038 17:52:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.977 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.238 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:41.238 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.238 "name": "raid_bdev1", 00:17:41.238 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:41.238 "strip_size_kb": 64, 00:17:41.238 "state": "online", 00:17:41.238 "raid_level": "raid5f", 00:17:41.238 "superblock": true, 00:17:41.238 "num_base_bdevs": 4, 00:17:41.238 "num_base_bdevs_discovered": 4, 00:17:41.238 "num_base_bdevs_operational": 4, 00:17:41.238 "process": { 00:17:41.238 "type": "rebuild", 00:17:41.238 "target": "spare", 00:17:41.238 "progress": { 00:17:41.238 "blocks": 130560, 00:17:41.238 "percent": 68 00:17:41.238 } 00:17:41.238 }, 00:17:41.238 "base_bdevs_list": [ 00:17:41.238 { 00:17:41.238 "name": "spare", 00:17:41.238 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:41.238 "is_configured": true, 00:17:41.238 "data_offset": 2048, 00:17:41.238 "data_size": 63488 00:17:41.238 }, 00:17:41.238 { 00:17:41.238 "name": "BaseBdev2", 00:17:41.238 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:41.238 "is_configured": true, 00:17:41.238 "data_offset": 2048, 00:17:41.238 "data_size": 63488 00:17:41.238 }, 00:17:41.238 { 00:17:41.238 "name": "BaseBdev3", 00:17:41.238 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:41.238 "is_configured": true, 00:17:41.238 "data_offset": 2048, 00:17:41.238 "data_size": 63488 00:17:41.238 }, 00:17:41.238 { 00:17:41.238 "name": "BaseBdev4", 00:17:41.238 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:41.238 "is_configured": true, 00:17:41.238 "data_offset": 2048, 00:17:41.238 "data_size": 63488 00:17:41.238 } 00:17:41.238 ] 00:17:41.238 }' 00:17:41.238 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.238 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.238 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:41.238 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.238 17:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.177 "name": "raid_bdev1", 00:17:42.177 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:42.177 "strip_size_kb": 64, 00:17:42.177 "state": "online", 00:17:42.177 "raid_level": "raid5f", 00:17:42.177 "superblock": true, 00:17:42.177 "num_base_bdevs": 4, 00:17:42.177 "num_base_bdevs_discovered": 4, 00:17:42.177 "num_base_bdevs_operational": 4, 00:17:42.177 "process": { 00:17:42.177 "type": "rebuild", 00:17:42.177 "target": "spare", 
00:17:42.177 "progress": { 00:17:42.177 "blocks": 151680, 00:17:42.177 "percent": 79 00:17:42.177 } 00:17:42.177 }, 00:17:42.177 "base_bdevs_list": [ 00:17:42.177 { 00:17:42.177 "name": "spare", 00:17:42.177 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:42.177 "is_configured": true, 00:17:42.177 "data_offset": 2048, 00:17:42.177 "data_size": 63488 00:17:42.177 }, 00:17:42.177 { 00:17:42.177 "name": "BaseBdev2", 00:17:42.177 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:42.177 "is_configured": true, 00:17:42.177 "data_offset": 2048, 00:17:42.177 "data_size": 63488 00:17:42.177 }, 00:17:42.177 { 00:17:42.177 "name": "BaseBdev3", 00:17:42.177 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:42.177 "is_configured": true, 00:17:42.177 "data_offset": 2048, 00:17:42.177 "data_size": 63488 00:17:42.177 }, 00:17:42.177 { 00:17:42.177 "name": "BaseBdev4", 00:17:42.177 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:42.177 "is_configured": true, 00:17:42.177 "data_offset": 2048, 00:17:42.177 "data_size": 63488 00:17:42.177 } 00:17:42.177 ] 00:17:42.177 }' 00:17:42.177 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.436 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.436 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.436 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.436 17:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.376 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.376 "name": "raid_bdev1", 00:17:43.377 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:43.377 "strip_size_kb": 64, 00:17:43.377 "state": "online", 00:17:43.377 "raid_level": "raid5f", 00:17:43.377 "superblock": true, 00:17:43.377 "num_base_bdevs": 4, 00:17:43.377 "num_base_bdevs_discovered": 4, 00:17:43.377 "num_base_bdevs_operational": 4, 00:17:43.377 "process": { 00:17:43.377 "type": "rebuild", 00:17:43.377 "target": "spare", 00:17:43.377 "progress": { 00:17:43.377 "blocks": 174720, 00:17:43.377 "percent": 91 00:17:43.377 } 00:17:43.377 }, 00:17:43.377 "base_bdevs_list": [ 00:17:43.377 { 00:17:43.377 "name": "spare", 00:17:43.377 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:43.377 "is_configured": true, 00:17:43.377 "data_offset": 2048, 00:17:43.377 "data_size": 63488 00:17:43.377 }, 00:17:43.377 { 00:17:43.377 "name": "BaseBdev2", 00:17:43.377 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:43.377 "is_configured": true, 00:17:43.377 
"data_offset": 2048, 00:17:43.377 "data_size": 63488 00:17:43.377 }, 00:17:43.377 { 00:17:43.377 "name": "BaseBdev3", 00:17:43.377 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:43.377 "is_configured": true, 00:17:43.377 "data_offset": 2048, 00:17:43.377 "data_size": 63488 00:17:43.377 }, 00:17:43.377 { 00:17:43.377 "name": "BaseBdev4", 00:17:43.377 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:43.377 "is_configured": true, 00:17:43.377 "data_offset": 2048, 00:17:43.377 "data_size": 63488 00:17:43.377 } 00:17:43.377 ] 00:17:43.377 }' 00:17:43.377 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.377 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.377 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.637 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.637 17:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.207 [2024-11-20 17:52:11.319214] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:44.207 [2024-11-20 17:52:11.319284] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:44.207 [2024-11-20 17:52:11.319409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.471 17:52:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.471 "name": "raid_bdev1", 00:17:44.471 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:44.471 "strip_size_kb": 64, 00:17:44.471 "state": "online", 00:17:44.471 "raid_level": "raid5f", 00:17:44.471 "superblock": true, 00:17:44.471 "num_base_bdevs": 4, 00:17:44.471 "num_base_bdevs_discovered": 4, 00:17:44.471 "num_base_bdevs_operational": 4, 00:17:44.471 "base_bdevs_list": [ 00:17:44.471 { 00:17:44.471 "name": "spare", 00:17:44.471 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:44.471 "is_configured": true, 00:17:44.471 "data_offset": 2048, 00:17:44.471 "data_size": 63488 00:17:44.471 }, 00:17:44.471 { 00:17:44.471 "name": "BaseBdev2", 00:17:44.471 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:44.471 "is_configured": true, 00:17:44.471 "data_offset": 2048, 00:17:44.471 "data_size": 63488 00:17:44.471 }, 00:17:44.471 { 00:17:44.471 "name": "BaseBdev3", 00:17:44.471 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:44.471 "is_configured": true, 00:17:44.471 "data_offset": 2048, 00:17:44.471 "data_size": 63488 00:17:44.471 }, 00:17:44.471 { 00:17:44.471 "name": "BaseBdev4", 00:17:44.471 "uuid": 
"dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:44.471 "is_configured": true, 00:17:44.471 "data_offset": 2048, 00:17:44.471 "data_size": 63488 00:17:44.471 } 00:17:44.471 ] 00:17:44.471 }' 00:17:44.471 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.732 "name": 
"raid_bdev1", 00:17:44.732 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:44.732 "strip_size_kb": 64, 00:17:44.732 "state": "online", 00:17:44.732 "raid_level": "raid5f", 00:17:44.732 "superblock": true, 00:17:44.732 "num_base_bdevs": 4, 00:17:44.732 "num_base_bdevs_discovered": 4, 00:17:44.732 "num_base_bdevs_operational": 4, 00:17:44.732 "base_bdevs_list": [ 00:17:44.732 { 00:17:44.732 "name": "spare", 00:17:44.732 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:44.732 "is_configured": true, 00:17:44.732 "data_offset": 2048, 00:17:44.732 "data_size": 63488 00:17:44.732 }, 00:17:44.732 { 00:17:44.732 "name": "BaseBdev2", 00:17:44.732 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:44.732 "is_configured": true, 00:17:44.732 "data_offset": 2048, 00:17:44.732 "data_size": 63488 00:17:44.732 }, 00:17:44.732 { 00:17:44.732 "name": "BaseBdev3", 00:17:44.732 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:44.732 "is_configured": true, 00:17:44.732 "data_offset": 2048, 00:17:44.732 "data_size": 63488 00:17:44.732 }, 00:17:44.732 { 00:17:44.732 "name": "BaseBdev4", 00:17:44.732 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:44.732 "is_configured": true, 00:17:44.732 "data_offset": 2048, 00:17:44.732 "data_size": 63488 00:17:44.732 } 00:17:44.732 ] 00:17:44.732 }' 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.732 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.993 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.993 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.993 "name": "raid_bdev1", 00:17:44.993 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:44.993 "strip_size_kb": 64, 00:17:44.993 "state": "online", 00:17:44.993 "raid_level": "raid5f", 00:17:44.993 "superblock": true, 00:17:44.993 "num_base_bdevs": 4, 00:17:44.993 "num_base_bdevs_discovered": 4, 00:17:44.993 "num_base_bdevs_operational": 4, 00:17:44.993 "base_bdevs_list": [ 00:17:44.993 { 00:17:44.993 "name": "spare", 
00:17:44.993 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:44.993 "is_configured": true, 00:17:44.993 "data_offset": 2048, 00:17:44.993 "data_size": 63488 00:17:44.993 }, 00:17:44.993 { 00:17:44.993 "name": "BaseBdev2", 00:17:44.993 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:44.993 "is_configured": true, 00:17:44.993 "data_offset": 2048, 00:17:44.993 "data_size": 63488 00:17:44.993 }, 00:17:44.993 { 00:17:44.993 "name": "BaseBdev3", 00:17:44.993 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:44.993 "is_configured": true, 00:17:44.993 "data_offset": 2048, 00:17:44.993 "data_size": 63488 00:17:44.993 }, 00:17:44.993 { 00:17:44.993 "name": "BaseBdev4", 00:17:44.993 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:44.993 "is_configured": true, 00:17:44.993 "data_offset": 2048, 00:17:44.993 "data_size": 63488 00:17:44.993 } 00:17:44.993 ] 00:17:44.993 }' 00:17:44.993 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.993 17:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.252 [2024-11-20 17:52:12.326041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.252 [2024-11-20 17:52:12.326122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.252 [2024-11-20 17:52:12.326242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.252 [2024-11-20 17:52:12.326378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.252 [2024-11-20 17:52:12.326443] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:45.252 17:52:12 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:45.253 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:45.512 /dev/nbd0 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.512 1+0 records in 00:17:45.512 1+0 records out 00:17:45.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349502 s, 11.7 MB/s 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:45.512 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:45.772 /dev/nbd1 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.772 1+0 records in 00:17:45.772 1+0 records out 00:17:45.772 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000340385 s, 12.0 MB/s 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:45.772 17:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:46.032 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:46.032 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.032 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:46.032 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:46.032 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:46.032 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.032 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:46.291 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:46.291 17:52:13 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:46.291 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:46.291 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:46.291 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:46.291 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:46.291 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:46.291 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:46.292 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.292 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:46.551 
17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.551 [2024-11-20 17:52:13.520808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:46.551 [2024-11-20 17:52:13.520871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.551 [2024-11-20 17:52:13.520905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:46.551 [2024-11-20 17:52:13.520915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.551 [2024-11-20 17:52:13.523589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.551 [2024-11-20 17:52:13.523626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:46.551 [2024-11-20 17:52:13.523731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:46.551 [2024-11-20 17:52:13.523792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.551 [2024-11-20 17:52:13.523950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.551 [2024-11-20 17:52:13.524059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.551 [2024-11-20 17:52:13.524168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:17:46.551 spare 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.551 [2024-11-20 17:52:13.624081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:46.551 [2024-11-20 17:52:13.624113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:46.551 [2024-11-20 17:52:13.624398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:46.551 [2024-11-20 17:52:13.631262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:46.551 [2024-11-20 17:52:13.631284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:46.551 [2024-11-20 17:52:13.631472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.551 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.552 "name": "raid_bdev1", 00:17:46.552 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:46.552 "strip_size_kb": 64, 00:17:46.552 "state": "online", 00:17:46.552 "raid_level": "raid5f", 00:17:46.552 "superblock": true, 00:17:46.552 "num_base_bdevs": 4, 00:17:46.552 "num_base_bdevs_discovered": 4, 00:17:46.552 "num_base_bdevs_operational": 4, 00:17:46.552 "base_bdevs_list": [ 00:17:46.552 { 00:17:46.552 "name": "spare", 00:17:46.552 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:46.552 "is_configured": true, 00:17:46.552 "data_offset": 2048, 00:17:46.552 "data_size": 63488 00:17:46.552 }, 00:17:46.552 { 00:17:46.552 "name": "BaseBdev2", 00:17:46.552 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:46.552 "is_configured": true, 00:17:46.552 "data_offset": 2048, 00:17:46.552 "data_size": 63488 00:17:46.552 }, 00:17:46.552 { 00:17:46.552 "name": 
"BaseBdev3", 00:17:46.552 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:46.552 "is_configured": true, 00:17:46.552 "data_offset": 2048, 00:17:46.552 "data_size": 63488 00:17:46.552 }, 00:17:46.552 { 00:17:46.552 "name": "BaseBdev4", 00:17:46.552 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:46.552 "is_configured": true, 00:17:46.552 "data_offset": 2048, 00:17:46.552 "data_size": 63488 00:17:46.552 } 00:17:46.552 ] 00:17:46.552 }' 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.552 17:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.123 "name": "raid_bdev1", 00:17:47.123 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:47.123 
"strip_size_kb": 64, 00:17:47.123 "state": "online", 00:17:47.123 "raid_level": "raid5f", 00:17:47.123 "superblock": true, 00:17:47.123 "num_base_bdevs": 4, 00:17:47.123 "num_base_bdevs_discovered": 4, 00:17:47.123 "num_base_bdevs_operational": 4, 00:17:47.123 "base_bdevs_list": [ 00:17:47.123 { 00:17:47.123 "name": "spare", 00:17:47.123 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:47.123 "is_configured": true, 00:17:47.123 "data_offset": 2048, 00:17:47.123 "data_size": 63488 00:17:47.123 }, 00:17:47.123 { 00:17:47.123 "name": "BaseBdev2", 00:17:47.123 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:47.123 "is_configured": true, 00:17:47.123 "data_offset": 2048, 00:17:47.123 "data_size": 63488 00:17:47.123 }, 00:17:47.123 { 00:17:47.123 "name": "BaseBdev3", 00:17:47.123 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:47.123 "is_configured": true, 00:17:47.123 "data_offset": 2048, 00:17:47.123 "data_size": 63488 00:17:47.123 }, 00:17:47.123 { 00:17:47.123 "name": "BaseBdev4", 00:17:47.123 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:47.123 "is_configured": true, 00:17:47.123 "data_offset": 2048, 00:17:47.123 "data_size": 63488 00:17:47.123 } 00:17:47.123 ] 00:17:47.123 }' 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.123 [2024-11-20 17:52:14.215786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.123 "name": "raid_bdev1", 00:17:47.123 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:47.123 "strip_size_kb": 64, 00:17:47.123 "state": "online", 00:17:47.123 "raid_level": "raid5f", 00:17:47.123 "superblock": true, 00:17:47.123 "num_base_bdevs": 4, 00:17:47.123 "num_base_bdevs_discovered": 3, 00:17:47.123 "num_base_bdevs_operational": 3, 00:17:47.123 "base_bdevs_list": [ 00:17:47.123 { 00:17:47.123 "name": null, 00:17:47.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.123 "is_configured": false, 00:17:47.123 "data_offset": 0, 00:17:47.123 "data_size": 63488 00:17:47.123 }, 00:17:47.123 { 00:17:47.123 "name": "BaseBdev2", 00:17:47.123 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:47.123 "is_configured": true, 00:17:47.123 "data_offset": 2048, 00:17:47.123 "data_size": 63488 00:17:47.123 }, 00:17:47.123 { 00:17:47.123 "name": "BaseBdev3", 00:17:47.123 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:47.123 "is_configured": true, 00:17:47.123 "data_offset": 2048, 00:17:47.123 "data_size": 63488 00:17:47.123 }, 00:17:47.123 { 00:17:47.123 "name": "BaseBdev4", 00:17:47.123 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:47.123 "is_configured": true, 00:17:47.123 "data_offset": 2048, 00:17:47.123 "data_size": 63488 00:17:47.123 } 00:17:47.123 ] 00:17:47.123 }' 
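The `verify_raid_bdev_state raid_bdev1 online raid5f 64 3` call above fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'` and compares a handful of fields against the expected values. As a minimal sketch (not SPDK code; the helper name mirrors the shell function but the exact comparisons in `bdev_raid.sh` may differ), the same field checks can be expressed in Python against the state dump captured here after `bdev_raid_remove_base_bdev spare`:

```python
import json

# State dump from the log after removing the "spare" base bdev: the array
# stays online at raid5f with 3 of 4 base bdevs still configured.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 63488},
    {"name": "BaseBdev2", "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev3", "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev4", "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size_kb,
                           num_base_bdevs_operational):
    """Sketch of the field checks the shell helper performs on the RPC dump."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_base_bdevs_operational)

# Matches the call in the log: verify_raid_bdev_state raid_bdev1 online raid5f 64 3
print(verify_raid_bdev_state(raid_bdev_info, "online", "raid5f", 64, 3))  # True
```

Note that after the removal the first slot in `base_bdevs_list` keeps a null name and the all-zero UUID placeholder rather than shrinking the list, which is why `num_base_bdevs` stays 4 while `num_base_bdevs_operational` drops to 3.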
00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.123 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.694 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:47.694 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.694 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.694 [2024-11-20 17:52:14.651092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.694 [2024-11-20 17:52:14.651290] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.694 [2024-11-20 17:52:14.651318] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:47.694 [2024-11-20 17:52:14.651365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.694 [2024-11-20 17:52:14.666097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:47.694 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.694 17:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:47.694 [2024-11-20 17:52:14.674810] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.634 "name": "raid_bdev1", 00:17:48.634 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:48.634 "strip_size_kb": 64, 00:17:48.634 "state": "online", 00:17:48.634 "raid_level": "raid5f", 00:17:48.634 "superblock": true, 00:17:48.634 "num_base_bdevs": 4, 00:17:48.634 "num_base_bdevs_discovered": 4, 00:17:48.634 "num_base_bdevs_operational": 4, 00:17:48.634 "process": { 00:17:48.634 "type": "rebuild", 00:17:48.634 "target": "spare", 00:17:48.634 "progress": { 00:17:48.634 "blocks": 19200, 00:17:48.634 "percent": 10 00:17:48.634 } 00:17:48.634 }, 00:17:48.634 "base_bdevs_list": [ 00:17:48.634 { 00:17:48.634 "name": "spare", 00:17:48.634 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:48.634 "is_configured": true, 00:17:48.634 "data_offset": 2048, 00:17:48.634 "data_size": 63488 00:17:48.634 }, 00:17:48.634 { 00:17:48.634 "name": "BaseBdev2", 00:17:48.634 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:48.634 "is_configured": true, 00:17:48.634 "data_offset": 2048, 00:17:48.634 "data_size": 63488 00:17:48.634 }, 00:17:48.634 { 00:17:48.634 "name": "BaseBdev3", 00:17:48.634 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:48.634 
"is_configured": true, 00:17:48.634 "data_offset": 2048, 00:17:48.634 "data_size": 63488 00:17:48.634 }, 00:17:48.634 { 00:17:48.634 "name": "BaseBdev4", 00:17:48.634 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:48.634 "is_configured": true, 00:17:48.634 "data_offset": 2048, 00:17:48.634 "data_size": 63488 00:17:48.634 } 00:17:48.634 ] 00:17:48.634 }' 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.634 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.894 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.894 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.894 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.894 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.894 [2024-11-20 17:52:15.821828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.894 [2024-11-20 17:52:15.881848] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.895 [2024-11-20 17:52:15.881910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.895 [2024-11-20 17:52:15.881926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.895 [2024-11-20 17:52:15.881935] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.895 "name": "raid_bdev1", 00:17:48.895 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:48.895 "strip_size_kb": 64, 00:17:48.895 "state": "online", 00:17:48.895 "raid_level": "raid5f", 00:17:48.895 "superblock": true, 00:17:48.895 "num_base_bdevs": 4, 00:17:48.895 "num_base_bdevs_discovered": 3, 
00:17:48.895 "num_base_bdevs_operational": 3, 00:17:48.895 "base_bdevs_list": [ 00:17:48.895 { 00:17:48.895 "name": null, 00:17:48.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.895 "is_configured": false, 00:17:48.895 "data_offset": 0, 00:17:48.895 "data_size": 63488 00:17:48.895 }, 00:17:48.895 { 00:17:48.895 "name": "BaseBdev2", 00:17:48.895 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:48.895 "is_configured": true, 00:17:48.895 "data_offset": 2048, 00:17:48.895 "data_size": 63488 00:17:48.895 }, 00:17:48.895 { 00:17:48.895 "name": "BaseBdev3", 00:17:48.895 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:48.895 "is_configured": true, 00:17:48.895 "data_offset": 2048, 00:17:48.895 "data_size": 63488 00:17:48.895 }, 00:17:48.895 { 00:17:48.895 "name": "BaseBdev4", 00:17:48.895 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:48.895 "is_configured": true, 00:17:48.895 "data_offset": 2048, 00:17:48.895 "data_size": 63488 00:17:48.895 } 00:17:48.895 ] 00:17:48.895 }' 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.895 17:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.465 17:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.466 17:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.466 17:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.466 [2024-11-20 17:52:16.361433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.466 [2024-11-20 17:52:16.361504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.466 [2024-11-20 17:52:16.361533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:49.466 [2024-11-20 17:52:16.361547] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.466 [2024-11-20 17:52:16.362133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.466 [2024-11-20 17:52:16.362158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.466 [2024-11-20 17:52:16.362267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:49.466 [2024-11-20 17:52:16.362284] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.466 [2024-11-20 17:52:16.362294] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:49.466 [2024-11-20 17:52:16.362321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.466 [2024-11-20 17:52:16.376647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:49.466 spare 00:17:49.466 17:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.466 17:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:49.466 [2024-11-20 17:52:16.385113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.517 "name": "raid_bdev1", 00:17:50.517 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:50.517 "strip_size_kb": 64, 00:17:50.517 "state": "online", 00:17:50.517 "raid_level": "raid5f", 00:17:50.517 "superblock": true, 00:17:50.517 "num_base_bdevs": 4, 00:17:50.517 "num_base_bdevs_discovered": 4, 00:17:50.517 "num_base_bdevs_operational": 4, 00:17:50.517 "process": { 00:17:50.517 "type": "rebuild", 00:17:50.517 "target": "spare", 00:17:50.517 "progress": { 00:17:50.517 "blocks": 19200, 00:17:50.517 "percent": 10 00:17:50.517 } 00:17:50.517 }, 00:17:50.517 "base_bdevs_list": [ 00:17:50.517 { 00:17:50.517 "name": "spare", 00:17:50.517 "uuid": "094a4b2b-15ff-559f-b1c3-0c6c2111b8b5", 00:17:50.517 "is_configured": true, 00:17:50.517 "data_offset": 2048, 00:17:50.517 "data_size": 63488 00:17:50.517 }, 00:17:50.517 { 00:17:50.517 "name": "BaseBdev2", 00:17:50.517 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:50.517 "is_configured": true, 00:17:50.517 "data_offset": 2048, 00:17:50.517 "data_size": 63488 00:17:50.517 }, 00:17:50.517 { 00:17:50.517 "name": "BaseBdev3", 00:17:50.517 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:50.517 "is_configured": true, 00:17:50.517 "data_offset": 2048, 00:17:50.517 "data_size": 63488 00:17:50.517 }, 00:17:50.517 { 00:17:50.517 "name": "BaseBdev4", 00:17:50.517 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 
00:17:50.517 "is_configured": true, 00:17:50.517 "data_offset": 2048, 00:17:50.517 "data_size": 63488 00:17:50.517 } 00:17:50.517 ] 00:17:50.517 }' 00:17:50.517 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.518 [2024-11-20 17:52:17.520361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.518 [2024-11-20 17:52:17.592360] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:50.518 [2024-11-20 17:52:17.592411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.518 [2024-11-20 17:52:17.592431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.518 [2024-11-20 17:52:17.592438] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.518 "name": "raid_bdev1", 00:17:50.518 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:50.518 "strip_size_kb": 64, 00:17:50.518 "state": "online", 00:17:50.518 "raid_level": "raid5f", 00:17:50.518 "superblock": true, 00:17:50.518 "num_base_bdevs": 4, 00:17:50.518 "num_base_bdevs_discovered": 3, 00:17:50.518 "num_base_bdevs_operational": 3, 00:17:50.518 "base_bdevs_list": [ 00:17:50.518 { 00:17:50.518 "name": null, 00:17:50.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.518 "is_configured": 
false, 00:17:50.518 "data_offset": 0, 00:17:50.518 "data_size": 63488 00:17:50.518 }, 00:17:50.518 { 00:17:50.518 "name": "BaseBdev2", 00:17:50.518 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:50.518 "is_configured": true, 00:17:50.518 "data_offset": 2048, 00:17:50.518 "data_size": 63488 00:17:50.518 }, 00:17:50.518 { 00:17:50.518 "name": "BaseBdev3", 00:17:50.518 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:50.518 "is_configured": true, 00:17:50.518 "data_offset": 2048, 00:17:50.518 "data_size": 63488 00:17:50.518 }, 00:17:50.518 { 00:17:50.518 "name": "BaseBdev4", 00:17:50.518 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:50.518 "is_configured": true, 00:17:50.518 "data_offset": 2048, 00:17:50.518 "data_size": 63488 00:17:50.518 } 00:17:50.518 ] 00:17:50.518 }' 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.518 17:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.088 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.088 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.088 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.089 "name": "raid_bdev1", 00:17:51.089 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:51.089 "strip_size_kb": 64, 00:17:51.089 "state": "online", 00:17:51.089 "raid_level": "raid5f", 00:17:51.089 "superblock": true, 00:17:51.089 "num_base_bdevs": 4, 00:17:51.089 "num_base_bdevs_discovered": 3, 00:17:51.089 "num_base_bdevs_operational": 3, 00:17:51.089 "base_bdevs_list": [ 00:17:51.089 { 00:17:51.089 "name": null, 00:17:51.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.089 "is_configured": false, 00:17:51.089 "data_offset": 0, 00:17:51.089 "data_size": 63488 00:17:51.089 }, 00:17:51.089 { 00:17:51.089 "name": "BaseBdev2", 00:17:51.089 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:51.089 "is_configured": true, 00:17:51.089 "data_offset": 2048, 00:17:51.089 "data_size": 63488 00:17:51.089 }, 00:17:51.089 { 00:17:51.089 "name": "BaseBdev3", 00:17:51.089 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:51.089 "is_configured": true, 00:17:51.089 "data_offset": 2048, 00:17:51.089 "data_size": 63488 00:17:51.089 }, 00:17:51.089 { 00:17:51.089 "name": "BaseBdev4", 00:17:51.089 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:51.089 "is_configured": true, 00:17:51.089 "data_offset": 2048, 00:17:51.089 "data_size": 63488 00:17:51.089 } 00:17:51.089 ] 00:17:51.089 }' 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.089 [2024-11-20 17:52:18.215829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:51.089 [2024-11-20 17:52:18.215885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.089 [2024-11-20 17:52:18.215911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:51.089 [2024-11-20 17:52:18.215921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.089 [2024-11-20 17:52:18.216467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.089 [2024-11-20 17:52:18.216485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:51.089 [2024-11-20 17:52:18.216580] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:51.089 [2024-11-20 17:52:18.216595] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.089 [2024-11-20 17:52:18.216609] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:17:51.089 [2024-11-20 17:52:18.216619] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:51.089 BaseBdev1 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.089 17:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.470 17:52:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.470 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.470 "name": "raid_bdev1", 00:17:52.470 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:52.470 "strip_size_kb": 64, 00:17:52.470 "state": "online", 00:17:52.470 "raid_level": "raid5f", 00:17:52.470 "superblock": true, 00:17:52.470 "num_base_bdevs": 4, 00:17:52.470 "num_base_bdevs_discovered": 3, 00:17:52.470 "num_base_bdevs_operational": 3, 00:17:52.470 "base_bdevs_list": [ 00:17:52.470 { 00:17:52.470 "name": null, 00:17:52.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.470 "is_configured": false, 00:17:52.470 "data_offset": 0, 00:17:52.470 "data_size": 63488 00:17:52.470 }, 00:17:52.470 { 00:17:52.470 "name": "BaseBdev2", 00:17:52.470 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:52.470 "is_configured": true, 00:17:52.470 "data_offset": 2048, 00:17:52.470 "data_size": 63488 00:17:52.470 }, 00:17:52.470 { 00:17:52.471 "name": "BaseBdev3", 00:17:52.471 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:52.471 "is_configured": true, 00:17:52.471 "data_offset": 2048, 00:17:52.471 "data_size": 63488 00:17:52.471 }, 00:17:52.471 { 00:17:52.471 "name": "BaseBdev4", 00:17:52.471 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:52.471 "is_configured": true, 00:17:52.471 "data_offset": 2048, 00:17:52.471 "data_size": 63488 00:17:52.471 } 00:17:52.471 ] 00:17:52.471 }' 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.471 17:52:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.471 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.731 "name": "raid_bdev1", 00:17:52.731 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:52.731 "strip_size_kb": 64, 00:17:52.731 "state": "online", 00:17:52.731 "raid_level": "raid5f", 00:17:52.731 "superblock": true, 00:17:52.731 "num_base_bdevs": 4, 00:17:52.731 "num_base_bdevs_discovered": 3, 00:17:52.731 "num_base_bdevs_operational": 3, 00:17:52.731 "base_bdevs_list": [ 00:17:52.731 { 00:17:52.731 "name": null, 00:17:52.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.731 "is_configured": false, 00:17:52.731 "data_offset": 0, 00:17:52.731 "data_size": 63488 00:17:52.731 }, 00:17:52.731 { 00:17:52.731 "name": "BaseBdev2", 00:17:52.731 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:52.731 "is_configured": true, 00:17:52.731 "data_offset": 2048, 00:17:52.731 "data_size": 63488 00:17:52.731 }, 00:17:52.731 { 00:17:52.731 "name": "BaseBdev3", 00:17:52.731 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:52.731 "is_configured": true, 00:17:52.731 "data_offset": 2048, 00:17:52.731 
"data_size": 63488 00:17:52.731 }, 00:17:52.731 { 00:17:52.731 "name": "BaseBdev4", 00:17:52.731 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:52.731 "is_configured": true, 00:17:52.731 "data_offset": 2048, 00:17:52.731 "data_size": 63488 00:17:52.731 } 00:17:52.731 ] 00:17:52.731 }' 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.731 [2024-11-20 
17:52:19.789187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.731 [2024-11-20 17:52:19.789398] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:52.731 [2024-11-20 17:52:19.789415] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:52.731 request: 00:17:52.731 { 00:17:52.731 "base_bdev": "BaseBdev1", 00:17:52.731 "raid_bdev": "raid_bdev1", 00:17:52.731 "method": "bdev_raid_add_base_bdev", 00:17:52.731 "req_id": 1 00:17:52.731 } 00:17:52.731 Got JSON-RPC error response 00:17:52.731 response: 00:17:52.731 { 00:17:52.731 "code": -22, 00:17:52.731 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:52.731 } 00:17:52.731 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:52.732 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:52.732 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.732 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.732 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.732 17:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.671 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.931 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.931 "name": "raid_bdev1", 00:17:53.931 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:53.931 "strip_size_kb": 64, 00:17:53.931 "state": "online", 00:17:53.931 "raid_level": "raid5f", 00:17:53.931 "superblock": true, 00:17:53.931 "num_base_bdevs": 4, 00:17:53.931 "num_base_bdevs_discovered": 3, 00:17:53.931 "num_base_bdevs_operational": 3, 00:17:53.931 "base_bdevs_list": [ 00:17:53.931 { 00:17:53.931 "name": null, 00:17:53.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.931 "is_configured": false, 00:17:53.931 "data_offset": 0, 00:17:53.931 "data_size": 63488 00:17:53.931 }, 00:17:53.931 { 00:17:53.931 "name": "BaseBdev2", 00:17:53.931 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:53.931 
"is_configured": true, 00:17:53.931 "data_offset": 2048, 00:17:53.931 "data_size": 63488 00:17:53.931 }, 00:17:53.931 { 00:17:53.931 "name": "BaseBdev3", 00:17:53.931 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:53.931 "is_configured": true, 00:17:53.931 "data_offset": 2048, 00:17:53.931 "data_size": 63488 00:17:53.931 }, 00:17:53.931 { 00:17:53.931 "name": "BaseBdev4", 00:17:53.931 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:53.931 "is_configured": true, 00:17:53.931 "data_offset": 2048, 00:17:53.931 "data_size": 63488 00:17:53.931 } 00:17:53.931 ] 00:17:53.931 }' 00:17:53.931 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.931 17:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:54.191 "name": "raid_bdev1", 00:17:54.191 "uuid": "ea382b94-08d9-46b4-82ab-146bd4e667fa", 00:17:54.191 "strip_size_kb": 64, 00:17:54.191 "state": "online", 00:17:54.191 "raid_level": "raid5f", 00:17:54.191 "superblock": true, 00:17:54.191 "num_base_bdevs": 4, 00:17:54.191 "num_base_bdevs_discovered": 3, 00:17:54.191 "num_base_bdevs_operational": 3, 00:17:54.191 "base_bdevs_list": [ 00:17:54.191 { 00:17:54.191 "name": null, 00:17:54.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.191 "is_configured": false, 00:17:54.191 "data_offset": 0, 00:17:54.191 "data_size": 63488 00:17:54.191 }, 00:17:54.191 { 00:17:54.191 "name": "BaseBdev2", 00:17:54.191 "uuid": "c3a196fa-9f5a-5cd2-8704-414905d582ca", 00:17:54.191 "is_configured": true, 00:17:54.191 "data_offset": 2048, 00:17:54.191 "data_size": 63488 00:17:54.191 }, 00:17:54.191 { 00:17:54.191 "name": "BaseBdev3", 00:17:54.191 "uuid": "d11a254e-8e9d-51d0-8ad1-ecb7c5463579", 00:17:54.191 "is_configured": true, 00:17:54.191 "data_offset": 2048, 00:17:54.191 "data_size": 63488 00:17:54.191 }, 00:17:54.191 { 00:17:54.191 "name": "BaseBdev4", 00:17:54.191 "uuid": "dc55f54a-2bfc-509d-bcb3-96a8a1651a4f", 00:17:54.191 "is_configured": true, 00:17:54.191 "data_offset": 2048, 00:17:54.191 "data_size": 63488 00:17:54.191 } 00:17:54.191 ] 00:17:54.191 }' 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85586 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 
85586 ']' 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85586 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.191 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85586 00:17:54.450 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.450 killing process with pid 85586 00:17:54.450 Received shutdown signal, test time was about 60.000000 seconds 00:17:54.450 00:17:54.450 Latency(us) 00:17:54.450 [2024-11-20T17:52:21.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.450 [2024-11-20T17:52:21.626Z] =================================================================================================================== 00:17:54.450 [2024-11-20T17:52:21.626Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.450 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.450 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85586' 00:17:54.450 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85586 00:17:54.450 [2024-11-20 17:52:21.389650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.450 [2024-11-20 17:52:21.389792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.450 17:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85586 00:17:54.450 [2024-11-20 17:52:21.389877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.450 [2024-11-20 17:52:21.389890] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:55.018 [2024-11-20 17:52:21.884092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.957 17:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:55.957 00:17:55.957 real 0m26.879s 00:17:55.957 user 0m33.457s 00:17:55.957 sys 0m3.127s 00:17:55.957 17:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.957 17:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.957 ************************************ 00:17:55.957 END TEST raid5f_rebuild_test_sb 00:17:55.957 ************************************ 00:17:55.957 17:52:23 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:55.957 17:52:23 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:55.957 17:52:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:55.957 17:52:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.957 17:52:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.957 ************************************ 00:17:55.957 START TEST raid_state_function_test_sb_4k 00:17:55.957 ************************************ 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:55.957 17:52:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86401 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86401' 00:17:55.957 Process raid pid: 86401 00:17:55.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86401 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86401 ']' 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.957 17:52:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.231 [2024-11-20 17:52:23.217717] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:17:56.231 [2024-11-20 17:52:23.217919] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.231 [2024-11-20 17:52:23.397555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.490 [2024-11-20 17:52:23.528246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.750 [2024-11-20 17:52:23.765490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.750 [2024-11-20 17:52:23.765535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.010 [2024-11-20 17:52:24.050214] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.010 [2024-11-20 17:52:24.050318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.010 [2024-11-20 17:52:24.050348] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.010 [2024-11-20 17:52:24.050371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
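The expected-failure handling traced earlier in this log (the `NOT rpc_cmd bdev_raid_add_base_bdev ...` call, followed by `es=1`, `(( es > 128 ))`, and `(( !es == 0 ))`) follows a common bash wrapper pattern: run a command that is supposed to fail, capture its exit status, and succeed only if it did fail. The sketch below is an illustrative reconstruction of that pattern, not SPDK's actual `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of an expected-failure wrapper like the NOT
# helper seen in the trace: run the wrapped command, record its exit
# status in es, and return success only when the command failed.
NOT() {
    local es=0
    "$@" || es=$?
    # The wrapper succeeds exactly when the wrapped command did not.
    (( es != 0 ))
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```

In the trace above, the same idea lets the test assert that `bdev_raid_add_base_bdev` is rejected (the JSON-RPC error with code -22) without aborting the run.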
00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.010 "name": "Existed_Raid", 00:17:57.010 "uuid": 
"56808b0f-137d-408c-8149-0ea4e56ef403", 00:17:57.010 "strip_size_kb": 0, 00:17:57.010 "state": "configuring", 00:17:57.010 "raid_level": "raid1", 00:17:57.010 "superblock": true, 00:17:57.010 "num_base_bdevs": 2, 00:17:57.010 "num_base_bdevs_discovered": 0, 00:17:57.010 "num_base_bdevs_operational": 2, 00:17:57.010 "base_bdevs_list": [ 00:17:57.010 { 00:17:57.010 "name": "BaseBdev1", 00:17:57.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.010 "is_configured": false, 00:17:57.010 "data_offset": 0, 00:17:57.010 "data_size": 0 00:17:57.010 }, 00:17:57.010 { 00:17:57.010 "name": "BaseBdev2", 00:17:57.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.010 "is_configured": false, 00:17:57.010 "data_offset": 0, 00:17:57.010 "data_size": 0 00:17:57.010 } 00:17:57.010 ] 00:17:57.010 }' 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.010 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.580 [2024-11-20 17:52:24.489359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.580 [2024-11-20 17:52:24.489457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:57.580 17:52:24 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.580 [2024-11-20 17:52:24.497343] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.580 [2024-11-20 17:52:24.497429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.580 [2024-11-20 17:52:24.497460] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.580 [2024-11-20 17:52:24.497487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.580 [2024-11-20 17:52:24.541712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.580 BaseBdev1 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.580 [ 00:17:57.580 { 00:17:57.580 "name": "BaseBdev1", 00:17:57.580 "aliases": [ 00:17:57.580 "0e425b8d-00a0-4da8-91c0-3edc9bc25684" 00:17:57.580 ], 00:17:57.580 "product_name": "Malloc disk", 00:17:57.580 "block_size": 4096, 00:17:57.580 "num_blocks": 8192, 00:17:57.580 "uuid": "0e425b8d-00a0-4da8-91c0-3edc9bc25684", 00:17:57.580 "assigned_rate_limits": { 00:17:57.580 "rw_ios_per_sec": 0, 00:17:57.580 "rw_mbytes_per_sec": 0, 00:17:57.580 "r_mbytes_per_sec": 0, 00:17:57.580 "w_mbytes_per_sec": 0 00:17:57.580 }, 00:17:57.580 "claimed": true, 00:17:57.580 "claim_type": "exclusive_write", 00:17:57.580 "zoned": false, 00:17:57.580 "supported_io_types": { 00:17:57.580 "read": true, 00:17:57.580 "write": true, 00:17:57.580 "unmap": true, 00:17:57.580 "flush": true, 00:17:57.580 "reset": true, 00:17:57.580 "nvme_admin": false, 00:17:57.580 "nvme_io": false, 00:17:57.580 "nvme_io_md": false, 00:17:57.580 "write_zeroes": true, 00:17:57.580 "zcopy": true, 00:17:57.580 
"get_zone_info": false, 00:17:57.580 "zone_management": false, 00:17:57.580 "zone_append": false, 00:17:57.580 "compare": false, 00:17:57.580 "compare_and_write": false, 00:17:57.580 "abort": true, 00:17:57.580 "seek_hole": false, 00:17:57.580 "seek_data": false, 00:17:57.580 "copy": true, 00:17:57.580 "nvme_iov_md": false 00:17:57.580 }, 00:17:57.580 "memory_domains": [ 00:17:57.580 { 00:17:57.580 "dma_device_id": "system", 00:17:57.580 "dma_device_type": 1 00:17:57.580 }, 00:17:57.580 { 00:17:57.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.580 "dma_device_type": 2 00:17:57.580 } 00:17:57.580 ], 00:17:57.580 "driver_specific": {} 00:17:57.580 } 00:17:57.580 ] 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.580 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.581 "name": "Existed_Raid", 00:17:57.581 "uuid": "6404f833-522e-40a6-a0ea-2d452a1fe9ba", 00:17:57.581 "strip_size_kb": 0, 00:17:57.581 "state": "configuring", 00:17:57.581 "raid_level": "raid1", 00:17:57.581 "superblock": true, 00:17:57.581 "num_base_bdevs": 2, 00:17:57.581 "num_base_bdevs_discovered": 1, 00:17:57.581 "num_base_bdevs_operational": 2, 00:17:57.581 "base_bdevs_list": [ 00:17:57.581 { 00:17:57.581 "name": "BaseBdev1", 00:17:57.581 "uuid": "0e425b8d-00a0-4da8-91c0-3edc9bc25684", 00:17:57.581 "is_configured": true, 00:17:57.581 "data_offset": 256, 00:17:57.581 "data_size": 7936 00:17:57.581 }, 00:17:57.581 { 00:17:57.581 "name": "BaseBdev2", 00:17:57.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.581 "is_configured": false, 00:17:57.581 "data_offset": 0, 00:17:57.581 "data_size": 0 00:17:57.581 } 00:17:57.581 ] 00:17:57.581 }' 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.581 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.841 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:57.841 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.841 17:52:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.841 [2024-11-20 17:52:25.001020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.841 [2024-11-20 17:52:25.001096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:57.841 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.841 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:57.841 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.841 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.841 [2024-11-20 17:52:25.009076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.841 [2024-11-20 17:52:25.011213] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.841 [2024-11-20 17:52:25.011298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:58.101 17:52:25 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.101 "name": "Existed_Raid", 00:17:58.101 "uuid": "64fdd04c-6ff9-4b0b-a253-8d75f6a18341", 00:17:58.101 "strip_size_kb": 0, 00:17:58.101 "state": "configuring", 00:17:58.101 "raid_level": "raid1", 00:17:58.101 "superblock": true, 
00:17:58.101 "num_base_bdevs": 2, 00:17:58.101 "num_base_bdevs_discovered": 1, 00:17:58.101 "num_base_bdevs_operational": 2, 00:17:58.101 "base_bdevs_list": [ 00:17:58.101 { 00:17:58.101 "name": "BaseBdev1", 00:17:58.101 "uuid": "0e425b8d-00a0-4da8-91c0-3edc9bc25684", 00:17:58.101 "is_configured": true, 00:17:58.101 "data_offset": 256, 00:17:58.101 "data_size": 7936 00:17:58.101 }, 00:17:58.101 { 00:17:58.101 "name": "BaseBdev2", 00:17:58.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.101 "is_configured": false, 00:17:58.101 "data_offset": 0, 00:17:58.101 "data_size": 0 00:17:58.101 } 00:17:58.101 ] 00:17:58.101 }' 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.101 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.361 [2024-11-20 17:52:25.498596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.361 BaseBdev2 00:17:58.361 [2024-11-20 17:52:25.498986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:58.361 [2024-11-20 17:52:25.499024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.361 [2024-11-20 17:52:25.499322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:58.361 [2024-11-20 17:52:25.499511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.361 [2024-11-20 17:52:25.499527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:17:58.361 [2024-11-20 17:52:25.499681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.361 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.361 [ 00:17:58.361 { 00:17:58.361 "name": "BaseBdev2", 00:17:58.361 "aliases": [ 00:17:58.361 "fde43988-67fb-4b4f-9662-f9e18683ce4e" 00:17:58.361 ], 00:17:58.361 "product_name": "Malloc 
disk", 00:17:58.361 "block_size": 4096, 00:17:58.361 "num_blocks": 8192, 00:17:58.361 "uuid": "fde43988-67fb-4b4f-9662-f9e18683ce4e", 00:17:58.361 "assigned_rate_limits": { 00:17:58.361 "rw_ios_per_sec": 0, 00:17:58.361 "rw_mbytes_per_sec": 0, 00:17:58.361 "r_mbytes_per_sec": 0, 00:17:58.361 "w_mbytes_per_sec": 0 00:17:58.361 }, 00:17:58.361 "claimed": true, 00:17:58.361 "claim_type": "exclusive_write", 00:17:58.361 "zoned": false, 00:17:58.361 "supported_io_types": { 00:17:58.361 "read": true, 00:17:58.361 "write": true, 00:17:58.361 "unmap": true, 00:17:58.361 "flush": true, 00:17:58.361 "reset": true, 00:17:58.622 "nvme_admin": false, 00:17:58.622 "nvme_io": false, 00:17:58.622 "nvme_io_md": false, 00:17:58.622 "write_zeroes": true, 00:17:58.622 "zcopy": true, 00:17:58.622 "get_zone_info": false, 00:17:58.622 "zone_management": false, 00:17:58.622 "zone_append": false, 00:17:58.622 "compare": false, 00:17:58.622 "compare_and_write": false, 00:17:58.622 "abort": true, 00:17:58.622 "seek_hole": false, 00:17:58.622 "seek_data": false, 00:17:58.622 "copy": true, 00:17:58.622 "nvme_iov_md": false 00:17:58.622 }, 00:17:58.622 "memory_domains": [ 00:17:58.622 { 00:17:58.622 "dma_device_id": "system", 00:17:58.622 "dma_device_type": 1 00:17:58.622 }, 00:17:58.622 { 00:17:58.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.622 "dma_device_type": 2 00:17:58.622 } 00:17:58.622 ], 00:17:58.622 "driver_specific": {} 00:17:58.622 } 00:17:58.622 ] 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.622 "name": "Existed_Raid", 00:17:58.622 "uuid": "64fdd04c-6ff9-4b0b-a253-8d75f6a18341", 00:17:58.622 "strip_size_kb": 0, 00:17:58.622 "state": "online", 
00:17:58.622 "raid_level": "raid1", 00:17:58.622 "superblock": true, 00:17:58.622 "num_base_bdevs": 2, 00:17:58.622 "num_base_bdevs_discovered": 2, 00:17:58.622 "num_base_bdevs_operational": 2, 00:17:58.622 "base_bdevs_list": [ 00:17:58.622 { 00:17:58.622 "name": "BaseBdev1", 00:17:58.622 "uuid": "0e425b8d-00a0-4da8-91c0-3edc9bc25684", 00:17:58.622 "is_configured": true, 00:17:58.622 "data_offset": 256, 00:17:58.622 "data_size": 7936 00:17:58.622 }, 00:17:58.622 { 00:17:58.622 "name": "BaseBdev2", 00:17:58.622 "uuid": "fde43988-67fb-4b4f-9662-f9e18683ce4e", 00:17:58.622 "is_configured": true, 00:17:58.622 "data_offset": 256, 00:17:58.622 "data_size": 7936 00:17:58.622 } 00:17:58.622 ] 00:17:58.622 }' 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.622 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.883 [2024-11-20 17:52:25.958047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.883 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.883 "name": "Existed_Raid", 00:17:58.883 "aliases": [ 00:17:58.883 "64fdd04c-6ff9-4b0b-a253-8d75f6a18341" 00:17:58.883 ], 00:17:58.883 "product_name": "Raid Volume", 00:17:58.883 "block_size": 4096, 00:17:58.883 "num_blocks": 7936, 00:17:58.883 "uuid": "64fdd04c-6ff9-4b0b-a253-8d75f6a18341", 00:17:58.883 "assigned_rate_limits": { 00:17:58.883 "rw_ios_per_sec": 0, 00:17:58.883 "rw_mbytes_per_sec": 0, 00:17:58.883 "r_mbytes_per_sec": 0, 00:17:58.883 "w_mbytes_per_sec": 0 00:17:58.883 }, 00:17:58.883 "claimed": false, 00:17:58.883 "zoned": false, 00:17:58.883 "supported_io_types": { 00:17:58.883 "read": true, 00:17:58.883 "write": true, 00:17:58.883 "unmap": false, 00:17:58.883 "flush": false, 00:17:58.883 "reset": true, 00:17:58.883 "nvme_admin": false, 00:17:58.883 "nvme_io": false, 00:17:58.883 "nvme_io_md": false, 00:17:58.883 "write_zeroes": true, 00:17:58.883 "zcopy": false, 00:17:58.883 "get_zone_info": false, 00:17:58.883 "zone_management": false, 00:17:58.883 "zone_append": false, 00:17:58.883 "compare": false, 00:17:58.883 "compare_and_write": false, 00:17:58.883 "abort": false, 00:17:58.883 "seek_hole": false, 00:17:58.883 "seek_data": false, 00:17:58.883 "copy": false, 00:17:58.883 "nvme_iov_md": false 00:17:58.883 }, 00:17:58.883 "memory_domains": [ 00:17:58.883 { 00:17:58.883 "dma_device_id": "system", 00:17:58.883 "dma_device_type": 1 00:17:58.883 }, 00:17:58.883 { 00:17:58.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.883 "dma_device_type": 2 00:17:58.883 }, 00:17:58.883 { 00:17:58.883 
"dma_device_id": "system", 00:17:58.883 "dma_device_type": 1 00:17:58.883 }, 00:17:58.883 { 00:17:58.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.883 "dma_device_type": 2 00:17:58.883 } 00:17:58.883 ], 00:17:58.883 "driver_specific": { 00:17:58.883 "raid": { 00:17:58.883 "uuid": "64fdd04c-6ff9-4b0b-a253-8d75f6a18341", 00:17:58.883 "strip_size_kb": 0, 00:17:58.883 "state": "online", 00:17:58.883 "raid_level": "raid1", 00:17:58.883 "superblock": true, 00:17:58.883 "num_base_bdevs": 2, 00:17:58.883 "num_base_bdevs_discovered": 2, 00:17:58.883 "num_base_bdevs_operational": 2, 00:17:58.883 "base_bdevs_list": [ 00:17:58.883 { 00:17:58.883 "name": "BaseBdev1", 00:17:58.883 "uuid": "0e425b8d-00a0-4da8-91c0-3edc9bc25684", 00:17:58.883 "is_configured": true, 00:17:58.883 "data_offset": 256, 00:17:58.883 "data_size": 7936 00:17:58.883 }, 00:17:58.883 { 00:17:58.883 "name": "BaseBdev2", 00:17:58.883 "uuid": "fde43988-67fb-4b4f-9662-f9e18683ce4e", 00:17:58.883 "is_configured": true, 00:17:58.883 "data_offset": 256, 00:17:58.883 "data_size": 7936 00:17:58.883 } 00:17:58.883 ] 00:17:58.883 } 00:17:58.883 } 00:17:58.883 }' 00:17:58.884 17:52:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.884 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:58.884 BaseBdev2' 00:17:58.884 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.144 
17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 [2024-11-20 17:52:26.181463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.144 17:52:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.404 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.404 "name": "Existed_Raid", 00:17:59.404 "uuid": "64fdd04c-6ff9-4b0b-a253-8d75f6a18341", 00:17:59.404 "strip_size_kb": 0, 00:17:59.404 "state": "online", 00:17:59.404 "raid_level": "raid1", 00:17:59.404 "superblock": true, 00:17:59.404 "num_base_bdevs": 2, 00:17:59.404 "num_base_bdevs_discovered": 1, 00:17:59.404 "num_base_bdevs_operational": 1, 00:17:59.404 "base_bdevs_list": [ 00:17:59.404 { 00:17:59.404 "name": null, 00:17:59.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.404 "is_configured": false, 00:17:59.404 "data_offset": 0, 00:17:59.404 "data_size": 7936 00:17:59.404 }, 00:17:59.404 { 00:17:59.404 "name": "BaseBdev2", 00:17:59.404 "uuid": "fde43988-67fb-4b4f-9662-f9e18683ce4e", 00:17:59.404 "is_configured": true, 00:17:59.404 "data_offset": 256, 00:17:59.404 "data_size": 7936 00:17:59.404 } 00:17:59.404 ] 00:17:59.404 }' 00:17:59.404 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.404 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:59.664 17:52:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.664 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.664 [2024-11-20 17:52:26.786071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:59.664 [2024-11-20 17:52:26.786248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.924 [2024-11-20 17:52:26.885471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.924 [2024-11-20 17:52:26.885623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.924 [2024-11-20 17:52:26.885668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:59.924 17:52:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:59.924 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86401 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86401 ']' 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86401 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86401 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86401' 00:17:59.925 killing process with pid 86401 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86401 00:17:59.925 [2024-11-20 17:52:26.983433] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.925 17:52:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86401 00:17:59.925 [2024-11-20 17:52:26.999800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.307 17:52:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:01.307 00:18:01.307 real 0m5.053s 00:18:01.307 user 0m7.123s 00:18:01.307 sys 0m0.979s 00:18:01.307 ************************************ 00:18:01.307 END TEST raid_state_function_test_sb_4k 00:18:01.307 ************************************ 00:18:01.307 17:52:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.307 17:52:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.307 17:52:28 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:01.307 17:52:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:01.307 17:52:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.307 17:52:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.307 ************************************ 00:18:01.307 START TEST raid_superblock_test_4k 00:18:01.307 ************************************ 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86649 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86649 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86649 ']' 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.307 17:52:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.307 [2024-11-20 17:52:28.353302] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:01.307 [2024-11-20 17:52:28.353553] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86649 ] 00:18:01.567 [2024-11-20 17:52:28.532901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.567 [2024-11-20 17:52:28.661281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.826 [2024-11-20 17:52:28.894227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.826 [2024-11-20 17:52:28.894317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:02.092 17:52:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.092 malloc1 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.092 [2024-11-20 17:52:29.225383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.092 [2024-11-20 17:52:29.225506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.092 
[2024-11-20 17:52:29.225550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:02.092 [2024-11-20 17:52:29.225582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.092 [2024-11-20 17:52:29.228040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.092 [2024-11-20 17:52:29.228122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.092 pt1 00:18:02.092 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.093 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.354 malloc2 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.354 [2024-11-20 17:52:29.289741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.354 [2024-11-20 17:52:29.289797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.354 [2024-11-20 17:52:29.289826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:02.354 [2024-11-20 17:52:29.289836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.354 [2024-11-20 17:52:29.292247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.354 [2024-11-20 17:52:29.292292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.354 pt2 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.354 [2024-11-20 17:52:29.301784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:02.354 [2024-11-20 17:52:29.303891] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.354 [2024-11-20 17:52:29.304085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:02.354 [2024-11-20 17:52:29.304102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:02.354 [2024-11-20 17:52:29.304337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:02.354 [2024-11-20 17:52:29.304497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:02.354 [2024-11-20 17:52:29.304520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:02.354 [2024-11-20 17:52:29.304703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.354 "name": "raid_bdev1", 00:18:02.354 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:02.354 "strip_size_kb": 0, 00:18:02.354 "state": "online", 00:18:02.354 "raid_level": "raid1", 00:18:02.354 "superblock": true, 00:18:02.354 "num_base_bdevs": 2, 00:18:02.354 "num_base_bdevs_discovered": 2, 00:18:02.354 "num_base_bdevs_operational": 2, 00:18:02.354 "base_bdevs_list": [ 00:18:02.354 { 00:18:02.354 "name": "pt1", 00:18:02.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.354 "is_configured": true, 00:18:02.354 "data_offset": 256, 00:18:02.354 "data_size": 7936 00:18:02.354 }, 00:18:02.354 { 00:18:02.354 "name": "pt2", 00:18:02.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.354 "is_configured": true, 00:18:02.354 "data_offset": 256, 00:18:02.354 "data_size": 7936 00:18:02.354 } 00:18:02.354 ] 00:18:02.354 }' 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.354 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:02.614 17:52:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.614 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.874 [2024-11-20 17:52:29.793153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.874 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.874 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.874 "name": "raid_bdev1", 00:18:02.874 "aliases": [ 00:18:02.874 "003dfe4c-729d-4306-8073-7703e41854ce" 00:18:02.874 ], 00:18:02.874 "product_name": "Raid Volume", 00:18:02.874 "block_size": 4096, 00:18:02.874 "num_blocks": 7936, 00:18:02.874 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:02.874 "assigned_rate_limits": { 00:18:02.874 "rw_ios_per_sec": 0, 00:18:02.874 "rw_mbytes_per_sec": 0, 00:18:02.874 "r_mbytes_per_sec": 0, 00:18:02.874 "w_mbytes_per_sec": 0 00:18:02.874 }, 00:18:02.874 "claimed": false, 00:18:02.874 "zoned": false, 00:18:02.874 "supported_io_types": { 00:18:02.874 "read": true, 00:18:02.874 "write": true, 00:18:02.874 "unmap": false, 00:18:02.874 "flush": false, 
00:18:02.874 "reset": true, 00:18:02.874 "nvme_admin": false, 00:18:02.874 "nvme_io": false, 00:18:02.874 "nvme_io_md": false, 00:18:02.874 "write_zeroes": true, 00:18:02.874 "zcopy": false, 00:18:02.874 "get_zone_info": false, 00:18:02.874 "zone_management": false, 00:18:02.874 "zone_append": false, 00:18:02.874 "compare": false, 00:18:02.874 "compare_and_write": false, 00:18:02.874 "abort": false, 00:18:02.874 "seek_hole": false, 00:18:02.874 "seek_data": false, 00:18:02.874 "copy": false, 00:18:02.874 "nvme_iov_md": false 00:18:02.875 }, 00:18:02.875 "memory_domains": [ 00:18:02.875 { 00:18:02.875 "dma_device_id": "system", 00:18:02.875 "dma_device_type": 1 00:18:02.875 }, 00:18:02.875 { 00:18:02.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.875 "dma_device_type": 2 00:18:02.875 }, 00:18:02.875 { 00:18:02.875 "dma_device_id": "system", 00:18:02.875 "dma_device_type": 1 00:18:02.875 }, 00:18:02.875 { 00:18:02.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.875 "dma_device_type": 2 00:18:02.875 } 00:18:02.875 ], 00:18:02.875 "driver_specific": { 00:18:02.875 "raid": { 00:18:02.875 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:02.875 "strip_size_kb": 0, 00:18:02.875 "state": "online", 00:18:02.875 "raid_level": "raid1", 00:18:02.875 "superblock": true, 00:18:02.875 "num_base_bdevs": 2, 00:18:02.875 "num_base_bdevs_discovered": 2, 00:18:02.875 "num_base_bdevs_operational": 2, 00:18:02.875 "base_bdevs_list": [ 00:18:02.875 { 00:18:02.875 "name": "pt1", 00:18:02.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.875 "is_configured": true, 00:18:02.875 "data_offset": 256, 00:18:02.875 "data_size": 7936 00:18:02.875 }, 00:18:02.875 { 00:18:02.875 "name": "pt2", 00:18:02.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.875 "is_configured": true, 00:18:02.875 "data_offset": 256, 00:18:02.875 "data_size": 7936 00:18:02.875 } 00:18:02.875 ] 00:18:02.875 } 00:18:02.875 } 00:18:02.875 }' 00:18:02.875 17:52:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:02.875 pt2' 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.875 17:52:29 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.875 17:52:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.875 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:02.875 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:02.875 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.875 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:02.875 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.875 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.875 [2024-11-20 17:52:30.028730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.875 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=003dfe4c-729d-4306-8073-7703e41854ce 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 003dfe4c-729d-4306-8073-7703e41854ce ']' 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.135 [2024-11-20 17:52:30.076369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.135 [2024-11-20 17:52:30.076425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.135 [2024-11-20 17:52:30.076510] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.135 [2024-11-20 17:52:30.076593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.135 [2024-11-20 17:52:30.076642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:03.135 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.136 [2024-11-20 17:52:30.220141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:03.136 [2024-11-20 17:52:30.222215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:03.136 [2024-11-20 17:52:30.222301] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:03.136 [2024-11-20 17:52:30.222382] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:03.136 [2024-11-20 17:52:30.222436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.136 [2024-11-20 17:52:30.222466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:03.136 request: 00:18:03.136 { 00:18:03.136 "name": "raid_bdev1", 00:18:03.136 "raid_level": "raid1", 00:18:03.136 "base_bdevs": [ 00:18:03.136 "malloc1", 00:18:03.136 "malloc2" 00:18:03.136 ], 00:18:03.136 "superblock": false, 00:18:03.136 "method": "bdev_raid_create", 00:18:03.136 "req_id": 1 00:18:03.136 } 00:18:03.136 Got JSON-RPC error response 00:18:03.136 response: 00:18:03.136 { 00:18:03.136 "code": -17, 00:18:03.136 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:03.136 } 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.136 [2024-11-20 17:52:30.284051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.136 [2024-11-20 17:52:30.284097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.136 [2024-11-20 17:52:30.284115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:03.136 [2024-11-20 17:52:30.284127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.136 [2024-11-20 17:52:30.286556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.136 [2024-11-20 17:52:30.286596] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.136 [2024-11-20 17:52:30.286661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:03.136 [2024-11-20 17:52:30.286721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.136 pt1 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.136 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.137 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.137 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.137 17:52:30 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.396 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.396 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.396 "name": "raid_bdev1", 00:18:03.396 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:03.396 "strip_size_kb": 0, 00:18:03.396 "state": "configuring", 00:18:03.396 "raid_level": "raid1", 00:18:03.396 "superblock": true, 00:18:03.396 "num_base_bdevs": 2, 00:18:03.396 "num_base_bdevs_discovered": 1, 00:18:03.396 "num_base_bdevs_operational": 2, 00:18:03.396 "base_bdevs_list": [ 00:18:03.396 { 00:18:03.396 "name": "pt1", 00:18:03.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.396 "is_configured": true, 00:18:03.396 "data_offset": 256, 00:18:03.396 "data_size": 7936 00:18:03.396 }, 00:18:03.396 { 00:18:03.396 "name": null, 00:18:03.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.396 "is_configured": false, 00:18:03.396 "data_offset": 256, 00:18:03.396 "data_size": 7936 00:18:03.396 } 00:18:03.396 ] 00:18:03.396 }' 00:18:03.396 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.396 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:18:03.656 [2024-11-20 17:52:30.719265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.656 [2024-11-20 17:52:30.719359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.656 [2024-11-20 17:52:30.719394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:03.656 [2024-11-20 17:52:30.719422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.656 [2024-11-20 17:52:30.719839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.656 [2024-11-20 17:52:30.719896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.656 [2024-11-20 17:52:30.719979] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.656 [2024-11-20 17:52:30.720039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.656 [2024-11-20 17:52:30.720185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:03.656 [2024-11-20 17:52:30.720224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.656 [2024-11-20 17:52:30.720484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:03.656 [2024-11-20 17:52:30.720657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:03.656 [2024-11-20 17:52:30.720694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:03.656 [2024-11-20 17:52:30.720853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.656 pt2 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.656 17:52:30 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.656 "name": "raid_bdev1", 00:18:03.656 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:03.656 
"strip_size_kb": 0, 00:18:03.656 "state": "online", 00:18:03.656 "raid_level": "raid1", 00:18:03.656 "superblock": true, 00:18:03.656 "num_base_bdevs": 2, 00:18:03.656 "num_base_bdevs_discovered": 2, 00:18:03.656 "num_base_bdevs_operational": 2, 00:18:03.656 "base_bdevs_list": [ 00:18:03.656 { 00:18:03.656 "name": "pt1", 00:18:03.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.656 "is_configured": true, 00:18:03.656 "data_offset": 256, 00:18:03.656 "data_size": 7936 00:18:03.656 }, 00:18:03.656 { 00:18:03.656 "name": "pt2", 00:18:03.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.656 "is_configured": true, 00:18:03.656 "data_offset": 256, 00:18:03.656 "data_size": 7936 00:18:03.656 } 00:18:03.656 ] 00:18:03.656 }' 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.656 17:52:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.225 17:52:31 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.225 [2024-11-20 17:52:31.186694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.225 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.225 "name": "raid_bdev1", 00:18:04.225 "aliases": [ 00:18:04.225 "003dfe4c-729d-4306-8073-7703e41854ce" 00:18:04.225 ], 00:18:04.225 "product_name": "Raid Volume", 00:18:04.225 "block_size": 4096, 00:18:04.225 "num_blocks": 7936, 00:18:04.225 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:04.225 "assigned_rate_limits": { 00:18:04.225 "rw_ios_per_sec": 0, 00:18:04.225 "rw_mbytes_per_sec": 0, 00:18:04.225 "r_mbytes_per_sec": 0, 00:18:04.225 "w_mbytes_per_sec": 0 00:18:04.225 }, 00:18:04.225 "claimed": false, 00:18:04.225 "zoned": false, 00:18:04.225 "supported_io_types": { 00:18:04.225 "read": true, 00:18:04.225 "write": true, 00:18:04.225 "unmap": false, 00:18:04.225 "flush": false, 00:18:04.225 "reset": true, 00:18:04.225 "nvme_admin": false, 00:18:04.225 "nvme_io": false, 00:18:04.225 "nvme_io_md": false, 00:18:04.225 "write_zeroes": true, 00:18:04.225 "zcopy": false, 00:18:04.225 "get_zone_info": false, 00:18:04.225 "zone_management": false, 00:18:04.226 "zone_append": false, 00:18:04.226 "compare": false, 00:18:04.226 "compare_and_write": false, 00:18:04.226 "abort": false, 00:18:04.226 "seek_hole": false, 00:18:04.226 "seek_data": false, 00:18:04.226 "copy": false, 00:18:04.226 "nvme_iov_md": false 00:18:04.226 }, 00:18:04.226 "memory_domains": [ 00:18:04.226 { 00:18:04.226 "dma_device_id": "system", 00:18:04.226 "dma_device_type": 1 00:18:04.226 }, 00:18:04.226 { 00:18:04.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.226 "dma_device_type": 2 00:18:04.226 }, 00:18:04.226 { 00:18:04.226 "dma_device_id": "system", 00:18:04.226 
"dma_device_type": 1 00:18:04.226 }, 00:18:04.226 { 00:18:04.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.226 "dma_device_type": 2 00:18:04.226 } 00:18:04.226 ], 00:18:04.226 "driver_specific": { 00:18:04.226 "raid": { 00:18:04.226 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:04.226 "strip_size_kb": 0, 00:18:04.226 "state": "online", 00:18:04.226 "raid_level": "raid1", 00:18:04.226 "superblock": true, 00:18:04.226 "num_base_bdevs": 2, 00:18:04.226 "num_base_bdevs_discovered": 2, 00:18:04.226 "num_base_bdevs_operational": 2, 00:18:04.226 "base_bdevs_list": [ 00:18:04.226 { 00:18:04.226 "name": "pt1", 00:18:04.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.226 "is_configured": true, 00:18:04.226 "data_offset": 256, 00:18:04.226 "data_size": 7936 00:18:04.226 }, 00:18:04.226 { 00:18:04.226 "name": "pt2", 00:18:04.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.226 "is_configured": true, 00:18:04.226 "data_offset": 256, 00:18:04.226 "data_size": 7936 00:18:04.226 } 00:18:04.226 ] 00:18:04.226 } 00:18:04.226 } 00:18:04.226 }' 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:04.226 pt2' 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.226 
17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.226 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.487 [2024-11-20 17:52:31.434272] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 003dfe4c-729d-4306-8073-7703e41854ce '!=' 003dfe4c-729d-4306-8073-7703e41854ce ']' 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.487 [2024-11-20 17:52:31.478002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.487 "name": "raid_bdev1", 00:18:04.487 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:04.487 "strip_size_kb": 0, 00:18:04.487 "state": "online", 00:18:04.487 "raid_level": "raid1", 00:18:04.487 "superblock": true, 00:18:04.487 "num_base_bdevs": 2, 00:18:04.487 "num_base_bdevs_discovered": 1, 00:18:04.487 "num_base_bdevs_operational": 1, 00:18:04.487 "base_bdevs_list": [ 00:18:04.487 { 00:18:04.487 "name": null, 00:18:04.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.487 "is_configured": false, 00:18:04.487 "data_offset": 0, 00:18:04.487 "data_size": 7936 00:18:04.487 }, 00:18:04.487 { 00:18:04.487 "name": "pt2", 00:18:04.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.487 "is_configured": true, 00:18:04.487 "data_offset": 256, 00:18:04.487 "data_size": 7936 00:18:04.487 } 00:18:04.487 ] 00:18:04.487 }' 00:18:04.487 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.487 17:52:31 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.747 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.747 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.747 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.007 [2024-11-20 17:52:31.925196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.007 [2024-11-20 17:52:31.925259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.007 [2024-11-20 17:52:31.925330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.007 [2024-11-20 17:52:31.925382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.007 [2024-11-20 17:52:31.925442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:18:05.007 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.008 [2024-11-20 17:52:31.989098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.008 [2024-11-20 17:52:31.989184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.008 [2024-11-20 17:52:31.989213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:05.008 [2024-11-20 17:52:31.989241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.008 [2024-11-20 17:52:31.991678] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.008 [2024-11-20 17:52:31.991750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.008 [2024-11-20 17:52:31.991835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:05.008 [2024-11-20 17:52:31.991880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.008 [2024-11-20 17:52:31.991981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:05.008 [2024-11-20 17:52:31.991993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:05.008 [2024-11-20 17:52:31.992229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:05.008 [2024-11-20 17:52:31.992377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:05.008 [2024-11-20 17:52:31.992386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:05.008 [2024-11-20 17:52:31.992511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.008 pt2 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.008 17:52:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.008 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.008 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.008 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.008 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.008 "name": "raid_bdev1", 00:18:05.008 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:05.008 "strip_size_kb": 0, 00:18:05.008 "state": "online", 00:18:05.008 "raid_level": "raid1", 00:18:05.008 "superblock": true, 00:18:05.008 "num_base_bdevs": 2, 00:18:05.008 "num_base_bdevs_discovered": 1, 00:18:05.008 "num_base_bdevs_operational": 1, 00:18:05.008 "base_bdevs_list": [ 00:18:05.008 { 00:18:05.008 "name": null, 00:18:05.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.008 "is_configured": false, 00:18:05.008 "data_offset": 256, 00:18:05.008 "data_size": 7936 00:18:05.008 }, 00:18:05.008 { 00:18:05.008 "name": "pt2", 00:18:05.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.008 "is_configured": true, 00:18:05.008 "data_offset": 256, 00:18:05.008 "data_size": 7936 00:18:05.008 } 00:18:05.008 ] 00:18:05.008 }' 
00:18:05.008 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.008 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.268 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:05.268 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.268 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.268 [2024-11-20 17:52:32.432363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.268 [2024-11-20 17:52:32.432431] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.268 [2024-11-20 17:52:32.432505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.268 [2024-11-20 17:52:32.432563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.268 [2024-11-20 17:52:32.432652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:05.268 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.268 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.268 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.529 [2024-11-20 17:52:32.492276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:05.529 [2024-11-20 17:52:32.492362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.529 [2024-11-20 17:52:32.492395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:05.529 [2024-11-20 17:52:32.492421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.529 [2024-11-20 17:52:32.494871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.529 [2024-11-20 17:52:32.494942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:05.529 [2024-11-20 17:52:32.495058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:05.529 [2024-11-20 17:52:32.495127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:05.529 [2024-11-20 17:52:32.495307] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:05.529 [2024-11-20 17:52:32.495361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.529 [2024-11-20 17:52:32.495397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:05.529 [2024-11-20 17:52:32.495499] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.529 [2024-11-20 17:52:32.495595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:05.529 [2024-11-20 17:52:32.495630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:05.529 [2024-11-20 17:52:32.495887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:05.529 [2024-11-20 17:52:32.496080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:05.529 [2024-11-20 17:52:32.496126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:05.529 [2024-11-20 17:52:32.496350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.529 pt1 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.529 "name": "raid_bdev1", 00:18:05.529 "uuid": "003dfe4c-729d-4306-8073-7703e41854ce", 00:18:05.529 "strip_size_kb": 0, 00:18:05.529 "state": "online", 00:18:05.529 "raid_level": "raid1", 00:18:05.529 "superblock": true, 00:18:05.529 "num_base_bdevs": 2, 00:18:05.529 "num_base_bdevs_discovered": 1, 00:18:05.529 "num_base_bdevs_operational": 1, 00:18:05.529 "base_bdevs_list": [ 00:18:05.529 { 00:18:05.529 "name": null, 00:18:05.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.529 "is_configured": false, 00:18:05.529 "data_offset": 256, 00:18:05.529 "data_size": 7936 00:18:05.529 }, 00:18:05.529 { 00:18:05.529 "name": "pt2", 00:18:05.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.529 "is_configured": true, 00:18:05.529 "data_offset": 256, 00:18:05.529 "data_size": 7936 00:18:05.529 } 00:18:05.529 ] 00:18:05.529 }' 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.529 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.789 17:52:32 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:05.789 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:05.789 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.789 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.789 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.049 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:06.049 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.049 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.049 17:52:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:06.049 17:52:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.049 [2024-11-20 17:52:32.983757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.049 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.049 17:52:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 003dfe4c-729d-4306-8073-7703e41854ce '!=' 003dfe4c-729d-4306-8073-7703e41854ce ']' 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86649 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86649 ']' 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86649 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86649 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.050 killing process with pid 86649 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86649' 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86649 00:18:06.050 [2024-11-20 17:52:33.059399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:06.050 [2024-11-20 17:52:33.059555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.050 [2024-11-20 17:52:33.059612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.050 [2024-11-20 17:52:33.059630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:06.050 17:52:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86649 00:18:06.309 [2024-11-20 17:52:33.270318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.719 17:52:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:07.719 00:18:07.719 real 0m6.197s 00:18:07.719 user 0m9.238s 00:18:07.719 sys 0m1.247s 00:18:07.719 17:52:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.719 17:52:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.719 ************************************ 00:18:07.719 END TEST raid_superblock_test_4k 00:18:07.719 ************************************ 00:18:07.719 17:52:34 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:18:07.719 17:52:34 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:07.719 17:52:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:07.719 17:52:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.719 17:52:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.719 ************************************ 00:18:07.719 START TEST raid_rebuild_test_sb_4k 00:18:07.719 ************************************ 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86976 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86976 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86976 ']' 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.719 17:52:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.719 [2024-11-20 17:52:34.638552] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:07.719 [2024-11-20 17:52:34.638753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:07.719 Zero copy mechanism will not be used. 00:18:07.719 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86976 ] 00:18:07.719 [2024-11-20 17:52:34.805691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.996 [2024-11-20 17:52:34.932472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.996 [2024-11-20 17:52:35.166894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.996 [2024-11-20 17:52:35.166952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:08.566 
17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.566 BaseBdev1_malloc 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.566 [2024-11-20 17:52:35.500981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:08.566 [2024-11-20 17:52:35.501078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.566 [2024-11-20 17:52:35.501117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:08.566 [2024-11-20 17:52:35.501132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.566 [2024-11-20 17:52:35.503584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.566 [2024-11-20 17:52:35.503624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:08.566 BaseBdev1 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.566 BaseBdev2_malloc 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.566 [2024-11-20 17:52:35.561406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:08.566 [2024-11-20 17:52:35.561483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.566 [2024-11-20 17:52:35.561508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:08.566 [2024-11-20 17:52:35.561520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.566 [2024-11-20 17:52:35.563889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.566 [2024-11-20 17:52:35.563928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:08.566 BaseBdev2 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.566 spare_malloc 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.566 spare_delay 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.566 [2024-11-20 17:52:35.666232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.566 [2024-11-20 17:52:35.666293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.566 [2024-11-20 17:52:35.666312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:08.566 [2024-11-20 17:52:35.666324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.566 [2024-11-20 17:52:35.668734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.566 [2024-11-20 17:52:35.668776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.566 spare 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.566 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.566 
[2024-11-20 17:52:35.678279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.567 [2024-11-20 17:52:35.680369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.567 [2024-11-20 17:52:35.680626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:08.567 [2024-11-20 17:52:35.680681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:08.567 [2024-11-20 17:52:35.680948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:08.567 [2024-11-20 17:52:35.681200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:08.567 [2024-11-20 17:52:35.681249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:08.567 [2024-11-20 17:52:35.681482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.567 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.827 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.827 "name": "raid_bdev1", 00:18:08.827 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:08.827 "strip_size_kb": 0, 00:18:08.827 "state": "online", 00:18:08.827 "raid_level": "raid1", 00:18:08.827 "superblock": true, 00:18:08.827 "num_base_bdevs": 2, 00:18:08.827 "num_base_bdevs_discovered": 2, 00:18:08.827 "num_base_bdevs_operational": 2, 00:18:08.827 "base_bdevs_list": [ 00:18:08.827 { 00:18:08.827 "name": "BaseBdev1", 00:18:08.827 "uuid": "792f11ef-badc-59c3-8f28-9f75308800b1", 00:18:08.827 "is_configured": true, 00:18:08.827 "data_offset": 256, 00:18:08.827 "data_size": 7936 00:18:08.827 }, 00:18:08.827 { 00:18:08.827 "name": "BaseBdev2", 00:18:08.827 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:08.827 "is_configured": true, 00:18:08.827 "data_offset": 256, 00:18:08.827 "data_size": 7936 00:18:08.827 } 00:18:08.827 ] 00:18:08.827 }' 00:18:08.827 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.827 17:52:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.087 [2024-11-20 17:52:36.141682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:09.087 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:09.088 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:09.348 [2024-11-20 17:52:36.413046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:09.348 /dev/nbd0 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.348 1+0 records in 00:18:09.348 1+0 records out 00:18:09.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041448 s, 9.9 MB/s 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.348 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:09.349 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.349 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:09.349 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:09.349 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.349 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:09.349 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:09.349 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:09.349 17:52:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:09.918 7936+0 records in 00:18:09.918 7936+0 records out 00:18:09.918 32505856 bytes (33 MB, 31 MiB) copied, 0.59075 s, 55.0 MB/s 00:18:09.918 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:09.918 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.918 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:09.918 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:10.179 [2024-11-20 17:52:37.295129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.179 [2024-11-20 17:52:37.326042] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.179 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.441 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.441 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.441 "name": 
"raid_bdev1", 00:18:10.441 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:10.441 "strip_size_kb": 0, 00:18:10.441 "state": "online", 00:18:10.441 "raid_level": "raid1", 00:18:10.441 "superblock": true, 00:18:10.441 "num_base_bdevs": 2, 00:18:10.441 "num_base_bdevs_discovered": 1, 00:18:10.441 "num_base_bdevs_operational": 1, 00:18:10.441 "base_bdevs_list": [ 00:18:10.441 { 00:18:10.441 "name": null, 00:18:10.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.441 "is_configured": false, 00:18:10.441 "data_offset": 0, 00:18:10.441 "data_size": 7936 00:18:10.441 }, 00:18:10.441 { 00:18:10.441 "name": "BaseBdev2", 00:18:10.441 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:10.441 "is_configured": true, 00:18:10.441 "data_offset": 256, 00:18:10.441 "data_size": 7936 00:18:10.441 } 00:18:10.441 ] 00:18:10.441 }' 00:18:10.441 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.441 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.702 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.702 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.702 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.702 [2024-11-20 17:52:37.773250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.702 [2024-11-20 17:52:37.791949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:10.702 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.702 17:52:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:10.702 [2024-11-20 17:52:37.794097] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.643 17:52:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.643 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.643 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.643 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.643 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.643 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.643 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.643 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.643 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.903 "name": "raid_bdev1", 00:18:11.903 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:11.903 "strip_size_kb": 0, 00:18:11.903 "state": "online", 00:18:11.903 "raid_level": "raid1", 00:18:11.903 "superblock": true, 00:18:11.903 "num_base_bdevs": 2, 00:18:11.903 "num_base_bdevs_discovered": 2, 00:18:11.903 "num_base_bdevs_operational": 2, 00:18:11.903 "process": { 00:18:11.903 "type": "rebuild", 00:18:11.903 "target": "spare", 00:18:11.903 "progress": { 00:18:11.903 "blocks": 2560, 00:18:11.903 "percent": 32 00:18:11.903 } 00:18:11.903 }, 00:18:11.903 "base_bdevs_list": [ 00:18:11.903 { 00:18:11.903 "name": "spare", 00:18:11.903 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d", 00:18:11.903 "is_configured": true, 00:18:11.903 "data_offset": 256, 
00:18:11.903 "data_size": 7936 00:18:11.903 }, 00:18:11.903 { 00:18:11.903 "name": "BaseBdev2", 00:18:11.903 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:11.903 "is_configured": true, 00:18:11.903 "data_offset": 256, 00:18:11.903 "data_size": 7936 00:18:11.903 } 00:18:11.903 ] 00:18:11.903 }' 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.903 17:52:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.903 [2024-11-20 17:52:38.954912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.903 [2024-11-20 17:52:39.002832] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:11.903 [2024-11-20 17:52:39.002892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.903 [2024-11-20 17:52:39.002907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.903 [2024-11-20 17:52:39.002917] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.903 
17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.903 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.164 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.164 "name": "raid_bdev1", 00:18:12.164 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:12.164 "strip_size_kb": 0, 00:18:12.164 "state": "online", 00:18:12.164 "raid_level": "raid1", 00:18:12.164 "superblock": true, 00:18:12.164 "num_base_bdevs": 2, 00:18:12.164 "num_base_bdevs_discovered": 1, 00:18:12.164 
"num_base_bdevs_operational": 1, 00:18:12.164 "base_bdevs_list": [ 00:18:12.164 { 00:18:12.164 "name": null, 00:18:12.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.164 "is_configured": false, 00:18:12.164 "data_offset": 0, 00:18:12.164 "data_size": 7936 00:18:12.164 }, 00:18:12.164 { 00:18:12.164 "name": "BaseBdev2", 00:18:12.164 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:12.164 "is_configured": true, 00:18:12.164 "data_offset": 256, 00:18:12.164 "data_size": 7936 00:18:12.164 } 00:18:12.164 ] 00:18:12.164 }' 00:18:12.164 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.164 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.426 
"name": "raid_bdev1", 00:18:12.426 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:12.426 "strip_size_kb": 0, 00:18:12.426 "state": "online", 00:18:12.426 "raid_level": "raid1", 00:18:12.426 "superblock": true, 00:18:12.426 "num_base_bdevs": 2, 00:18:12.426 "num_base_bdevs_discovered": 1, 00:18:12.426 "num_base_bdevs_operational": 1, 00:18:12.426 "base_bdevs_list": [ 00:18:12.426 { 00:18:12.426 "name": null, 00:18:12.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.426 "is_configured": false, 00:18:12.426 "data_offset": 0, 00:18:12.426 "data_size": 7936 00:18:12.426 }, 00:18:12.426 { 00:18:12.426 "name": "BaseBdev2", 00:18:12.426 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:12.426 "is_configured": true, 00:18:12.426 "data_offset": 256, 00:18:12.426 "data_size": 7936 00:18:12.426 } 00:18:12.426 ] 00:18:12.426 }' 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.426 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.426 [2024-11-20 17:52:39.582328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.426 [2024-11-20 17:52:39.599298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:12.686 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:12.686 17:52:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:12.686 [2024-11-20 17:52:39.601516] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.627 "name": "raid_bdev1", 00:18:13.627 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:13.627 "strip_size_kb": 0, 00:18:13.627 "state": "online", 00:18:13.627 "raid_level": "raid1", 00:18:13.627 "superblock": true, 00:18:13.627 "num_base_bdevs": 2, 00:18:13.627 "num_base_bdevs_discovered": 2, 00:18:13.627 "num_base_bdevs_operational": 2, 00:18:13.627 "process": { 00:18:13.627 "type": "rebuild", 00:18:13.627 "target": "spare", 00:18:13.627 "progress": { 00:18:13.627 "blocks": 2560, 00:18:13.627 
"percent": 32 00:18:13.627 } 00:18:13.627 }, 00:18:13.627 "base_bdevs_list": [ 00:18:13.627 { 00:18:13.627 "name": "spare", 00:18:13.627 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d", 00:18:13.627 "is_configured": true, 00:18:13.627 "data_offset": 256, 00:18:13.627 "data_size": 7936 00:18:13.627 }, 00:18:13.627 { 00:18:13.627 "name": "BaseBdev2", 00:18:13.627 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:13.627 "is_configured": true, 00:18:13.627 "data_offset": 256, 00:18:13.627 "data_size": 7936 00:18:13.627 } 00:18:13.627 ] 00:18:13.627 }' 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:13.627 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=689 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.627 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.887 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.887 "name": "raid_bdev1", 00:18:13.887 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:13.887 "strip_size_kb": 0, 00:18:13.887 "state": "online", 00:18:13.887 "raid_level": "raid1", 00:18:13.887 "superblock": true, 00:18:13.887 "num_base_bdevs": 2, 00:18:13.887 "num_base_bdevs_discovered": 2, 00:18:13.887 "num_base_bdevs_operational": 2, 00:18:13.887 "process": { 00:18:13.887 "type": "rebuild", 00:18:13.887 "target": "spare", 00:18:13.887 "progress": { 00:18:13.887 "blocks": 2816, 00:18:13.887 "percent": 35 00:18:13.887 } 00:18:13.887 }, 00:18:13.887 "base_bdevs_list": [ 00:18:13.887 { 00:18:13.887 "name": "spare", 00:18:13.887 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d", 00:18:13.887 "is_configured": true, 00:18:13.887 "data_offset": 256, 00:18:13.887 "data_size": 7936 00:18:13.887 }, 00:18:13.887 { 00:18:13.887 "name": "BaseBdev2", 
00:18:13.887 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:13.887 "is_configured": true, 00:18:13.887 "data_offset": 256, 00:18:13.887 "data_size": 7936 00:18:13.887 } 00:18:13.887 ] 00:18:13.887 }' 00:18:13.887 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.887 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.887 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.887 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.887 17:52:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.828 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.828 "name": "raid_bdev1", 00:18:14.828 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:14.828 "strip_size_kb": 0, 00:18:14.828 "state": "online", 00:18:14.828 "raid_level": "raid1", 00:18:14.828 "superblock": true, 00:18:14.828 "num_base_bdevs": 2, 00:18:14.829 "num_base_bdevs_discovered": 2, 00:18:14.829 "num_base_bdevs_operational": 2, 00:18:14.829 "process": { 00:18:14.829 "type": "rebuild", 00:18:14.829 "target": "spare", 00:18:14.829 "progress": { 00:18:14.829 "blocks": 5632, 00:18:14.829 "percent": 70 00:18:14.829 } 00:18:14.829 }, 00:18:14.829 "base_bdevs_list": [ 00:18:14.829 { 00:18:14.829 "name": "spare", 00:18:14.829 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d", 00:18:14.829 "is_configured": true, 00:18:14.829 "data_offset": 256, 00:18:14.829 "data_size": 7936 00:18:14.829 }, 00:18:14.829 { 00:18:14.829 "name": "BaseBdev2", 00:18:14.829 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:14.829 "is_configured": true, 00:18:14.829 "data_offset": 256, 00:18:14.829 "data_size": 7936 00:18:14.829 } 00:18:14.829 ] 00:18:14.829 }' 00:18:14.829 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.829 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.829 17:52:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.089 17:52:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.089 17:52:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.660 [2024-11-20 17:52:42.723204] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:15.660 [2024-11-20 17:52:42.723273] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:15.660 [2024-11-20 17:52:42.723379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.920 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.180 "name": "raid_bdev1", 00:18:16.180 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:16.180 "strip_size_kb": 0, 00:18:16.180 "state": "online", 00:18:16.180 "raid_level": "raid1", 00:18:16.180 "superblock": true, 00:18:16.180 "num_base_bdevs": 2, 00:18:16.180 "num_base_bdevs_discovered": 2, 00:18:16.180 "num_base_bdevs_operational": 2, 00:18:16.180 "base_bdevs_list": [ 00:18:16.180 { 00:18:16.180 "name": 
"spare", 00:18:16.180 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d", 00:18:16.180 "is_configured": true, 00:18:16.180 "data_offset": 256, 00:18:16.180 "data_size": 7936 00:18:16.180 }, 00:18:16.180 { 00:18:16.180 "name": "BaseBdev2", 00:18:16.180 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:16.180 "is_configured": true, 00:18:16.180 "data_offset": 256, 00:18:16.180 "data_size": 7936 00:18:16.180 } 00:18:16.180 ] 00:18:16.180 }' 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.180 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.181 "name": "raid_bdev1", 00:18:16.181 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:16.181 "strip_size_kb": 0, 00:18:16.181 "state": "online", 00:18:16.181 "raid_level": "raid1", 00:18:16.181 "superblock": true, 00:18:16.181 "num_base_bdevs": 2, 00:18:16.181 "num_base_bdevs_discovered": 2, 00:18:16.181 "num_base_bdevs_operational": 2, 00:18:16.181 "base_bdevs_list": [ 00:18:16.181 { 00:18:16.181 "name": "spare", 00:18:16.181 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d", 00:18:16.181 "is_configured": true, 00:18:16.181 "data_offset": 256, 00:18:16.181 "data_size": 7936 00:18:16.181 }, 00:18:16.181 { 00:18:16.181 "name": "BaseBdev2", 00:18:16.181 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:16.181 "is_configured": true, 00:18:16.181 "data_offset": 256, 00:18:16.181 "data_size": 7936 00:18:16.181 } 00:18:16.181 ] 00:18:16.181 }' 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.181 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.440 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.440 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.440 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.440 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.440 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.440 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.440 "name": "raid_bdev1", 00:18:16.440 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:16.440 "strip_size_kb": 0, 00:18:16.440 "state": "online", 00:18:16.440 "raid_level": "raid1", 00:18:16.440 "superblock": true, 00:18:16.440 "num_base_bdevs": 2, 00:18:16.440 "num_base_bdevs_discovered": 2, 00:18:16.440 "num_base_bdevs_operational": 2, 00:18:16.440 "base_bdevs_list": [ 00:18:16.440 { 00:18:16.440 "name": "spare", 00:18:16.440 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d", 00:18:16.440 "is_configured": true, 00:18:16.440 "data_offset": 256, 00:18:16.440 "data_size": 7936 00:18:16.440 }, 00:18:16.440 
{ 00:18:16.440 "name": "BaseBdev2", 00:18:16.440 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:16.440 "is_configured": true, 00:18:16.440 "data_offset": 256, 00:18:16.440 "data_size": 7936 00:18:16.440 } 00:18:16.440 ] 00:18:16.440 }' 00:18:16.440 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.440 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.700 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:16.700 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.700 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.700 [2024-11-20 17:52:43.800102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.700 [2024-11-20 17:52:43.800136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.700 [2024-11-20 17:52:43.800220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.700 [2024-11-20 17:52:43.800290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.700 [2024-11-20 17:52:43.800301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:16.700 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.701 
17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:16.701 17:52:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:16.961 /dev/nbd0 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:16.961 17:52:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.961 1+0 records in 00:18:16.961 1+0 records out 00:18:16.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449689 s, 9.1 MB/s 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:16.961 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:17.222 /dev/nbd1 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.222 1+0 records in 00:18:17.222 1+0 records out 00:18:17.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369496 s, 11.1 MB/s 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0
00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:17.222 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:18:17.482 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:18:17.482 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:18:17.482 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:17.482 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:17.482 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:18:17.482 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:17.482 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:17.742 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.002 17:52:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.002 [2024-11-20 17:52:44.996333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:18.002 [2024-11-20 17:52:44.996395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:18.002 [2024-11-20 17:52:44.996440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:18:18.002 [2024-11-20 17:52:44.996450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:18.002 [2024-11-20 17:52:44.999006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:18.003 [2024-11-20 17:52:44.999060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:18.003 [2024-11-20 17:52:44.999166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:18:18.003 [2024-11-20 17:52:44.999226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:18.003 [2024-11-20 17:52:44.999408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:18.003 spare
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.003 [2024-11-20 17:52:45.099336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:18:18.003 [2024-11-20 17:52:45.099364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:18.003 [2024-11-20 17:52:45.099626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50
00:18:18.003 [2024-11-20 17:52:45.099802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:18:18.003 [2024-11-20 17:52:45.099812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:18:18.003 [2024-11-20 17:52:45.099974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:18.003 "name": "raid_bdev1",
00:18:18.003 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:18.003 "strip_size_kb": 0,
00:18:18.003 "state": "online",
00:18:18.003 "raid_level": "raid1",
00:18:18.003 "superblock": true,
00:18:18.003 "num_base_bdevs": 2,
00:18:18.003 "num_base_bdevs_discovered": 2,
00:18:18.003 "num_base_bdevs_operational": 2,
00:18:18.003 "base_bdevs_list": [
00:18:18.003 {
00:18:18.003 "name": "spare",
00:18:18.003 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d",
00:18:18.003 "is_configured": true,
00:18:18.003 "data_offset": 256,
00:18:18.003 "data_size": 7936
00:18:18.003 },
00:18:18.003 {
00:18:18.003 "name": "BaseBdev2",
00:18:18.003 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:18.003 "is_configured": true,
00:18:18.003 "data_offset": 256,
00:18:18.003 "data_size": 7936
00:18:18.003 }
00:18:18.003 ]
00:18:18.003 }'
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:18.003 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.572 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:18.572 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:18.572 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:18.572 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:18.572 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:18.572 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:18.573 "name": "raid_bdev1",
00:18:18.573 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:18.573 "strip_size_kb": 0,
00:18:18.573 "state": "online",
00:18:18.573 "raid_level": "raid1",
00:18:18.573 "superblock": true,
00:18:18.573 "num_base_bdevs": 2,
00:18:18.573 "num_base_bdevs_discovered": 2,
00:18:18.573 "num_base_bdevs_operational": 2,
00:18:18.573 "base_bdevs_list": [
00:18:18.573 {
00:18:18.573 "name": "spare",
00:18:18.573 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d",
00:18:18.573 "is_configured": true,
00:18:18.573 "data_offset": 256,
00:18:18.573 "data_size": 7936
00:18:18.573 },
00:18:18.573 {
00:18:18.573 "name": "BaseBdev2",
00:18:18.573 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:18.573 "is_configured": true,
00:18:18.573 "data_offset": 256,
00:18:18.573 "data_size": 7936
00:18:18.573 }
00:18:18.573 ]
00:18:18.573 }'
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.573 [2024-11-20 17:52:45.719118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.573 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:18.833 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.833 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:18.833 "name": "raid_bdev1",
00:18:18.833 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:18.833 "strip_size_kb": 0,
00:18:18.833 "state": "online",
00:18:18.833 "raid_level": "raid1",
00:18:18.833 "superblock": true,
00:18:18.833 "num_base_bdevs": 2,
00:18:18.833 "num_base_bdevs_discovered": 1,
00:18:18.833 "num_base_bdevs_operational": 1,
00:18:18.833 "base_bdevs_list": [
00:18:18.833 {
00:18:18.833 "name": null,
00:18:18.833 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:18.833 "is_configured": false,
00:18:18.833 "data_offset": 0,
00:18:18.833 "data_size": 7936
00:18:18.833 },
00:18:18.833 {
00:18:18.833 "name": "BaseBdev2",
00:18:18.833 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:18.833 "is_configured": true,
00:18:18.833 "data_offset": 256,
00:18:18.833 "data_size": 7936
00:18:18.833 }
00:18:18.833 ]
00:18:18.833 }'
00:18:18.833 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:18.833 17:52:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:19.093 17:52:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:19.093 17:52:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:19.093 17:52:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:19.093 [2024-11-20 17:52:46.150425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:19.093 [2024-11-20 17:52:46.150626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:18:19.093 [2024-11-20 17:52:46.150695] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:18:19.093 [2024-11-20 17:52:46.150749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:19.093 [2024-11-20 17:52:46.168384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20
00:18:19.093 17:52:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:19.093 17:52:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1
00:18:19.093 [2024-11-20 17:52:46.170574] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:20.034 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.294 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:20.294 "name": "raid_bdev1",
00:18:20.294 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:20.294 "strip_size_kb": 0,
00:18:20.294 "state": "online",
00:18:20.294 "raid_level": "raid1",
00:18:20.294 "superblock": true,
00:18:20.294 "num_base_bdevs": 2,
00:18:20.295 "num_base_bdevs_discovered": 2,
00:18:20.295 "num_base_bdevs_operational": 2,
00:18:20.295 "process": {
00:18:20.295 "type": "rebuild",
00:18:20.295 "target": "spare",
00:18:20.295 "progress": {
00:18:20.295 "blocks": 2560,
00:18:20.295 "percent": 32
00:18:20.295 }
00:18:20.295 },
00:18:20.295 "base_bdevs_list": [
00:18:20.295 {
00:18:20.295 "name": "spare",
00:18:20.295 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d",
00:18:20.295 "is_configured": true,
00:18:20.295 "data_offset": 256,
00:18:20.295 "data_size": 7936
00:18:20.295 },
00:18:20.295 {
00:18:20.295 "name": "BaseBdev2",
00:18:20.295 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:20.295 "is_configured": true,
00:18:20.295 "data_offset": 256,
00:18:20.295 "data_size": 7936
00:18:20.295 }
00:18:20.295 ]
00:18:20.295 }'
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:20.295 [2024-11-20 17:52:47.337429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:20.295 [2024-11-20 17:52:47.378916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:20.295 [2024-11-20 17:52:47.378976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:20.295 [2024-11-20 17:52:47.378991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:20.295 [2024-11-20 17:52:47.379001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:20.295 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.555 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:20.555 "name": "raid_bdev1",
00:18:20.555 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:20.555 "strip_size_kb": 0,
00:18:20.555 "state": "online",
00:18:20.555 "raid_level": "raid1",
00:18:20.555 "superblock": true,
00:18:20.555 "num_base_bdevs": 2,
00:18:20.555 "num_base_bdevs_discovered": 1,
00:18:20.555 "num_base_bdevs_operational": 1,
00:18:20.555 "base_bdevs_list": [
00:18:20.555 {
00:18:20.555 "name": null,
00:18:20.555 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:20.555 "is_configured": false,
00:18:20.555 "data_offset": 0,
00:18:20.555 "data_size": 7936
00:18:20.555 },
00:18:20.555 {
00:18:20.555 "name": "BaseBdev2",
00:18:20.555 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:20.555 "is_configured": true,
00:18:20.555 "data_offset": 256,
00:18:20.555 "data_size": 7936
00:18:20.555 }
00:18:20.555 ]
00:18:20.555 }'
00:18:20.555 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:20.555 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:20.816 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:20.816 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.816 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:20.816 [2024-11-20 17:52:47.878295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:20.816 [2024-11-20 17:52:47.878406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:20.816 [2024-11-20 17:52:47.878450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:18:20.816 [2024-11-20 17:52:47.878493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:20.816 [2024-11-20 17:52:47.879105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:20.816 [2024-11-20 17:52:47.879184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:20.816 [2024-11-20 17:52:47.879325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:18:20.816 [2024-11-20 17:52:47.879371] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:18:20.816 [2024-11-20 17:52:47.879418] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:18:20.816 [2024-11-20 17:52:47.879471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:20.816 [2024-11-20 17:52:47.897590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0
00:18:20.816 spare
00:18:20.816 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.816 17:52:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1
00:18:20.816 [2024-11-20 17:52:47.899775] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:21.756 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:21.756 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:21.756 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:21.756 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:21.756 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:21.756 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:21.756 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:21.757 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.757 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:21.757 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.017 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:22.017 "name": "raid_bdev1",
00:18:22.017 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:22.017 "strip_size_kb": 0,
00:18:22.017 "state": "online",
00:18:22.017 "raid_level": "raid1",
00:18:22.017 "superblock": true,
00:18:22.017 "num_base_bdevs": 2,
00:18:22.017 "num_base_bdevs_discovered": 2,
00:18:22.017 "num_base_bdevs_operational": 2,
00:18:22.017 "process": {
00:18:22.017 "type": "rebuild",
00:18:22.017 "target": "spare",
00:18:22.017 "progress": {
00:18:22.017 "blocks": 2560,
00:18:22.017 "percent": 32
00:18:22.017 }
00:18:22.017 },
00:18:22.017 "base_bdevs_list": [
00:18:22.017 {
00:18:22.017 "name": "spare",
00:18:22.017 "uuid": "b002debb-a8e4-50ad-b3d5-bab118af5f5d",
00:18:22.017 "is_configured": true,
00:18:22.017 "data_offset": 256,
00:18:22.017 "data_size": 7936
00:18:22.017 },
00:18:22.017 {
00:18:22.017 "name": "BaseBdev2",
00:18:22.017 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:22.017 "is_configured": true,
00:18:22.017 "data_offset": 256,
00:18:22.017 "data_size": 7936
00:18:22.017 }
00:18:22.017 ]
00:18:22.017 }'
00:18:22.017 17:52:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:22.017 [2024-11-20 17:52:49.062722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:22.017 [2024-11-20 17:52:49.108291] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:22.017 [2024-11-20 17:52:49.108344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:22.017 [2024-11-20 17:52:49.108361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:22.017 [2024-11-20 17:52:49.108369] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:22.017 "name": "raid_bdev1",
00:18:22.017 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:22.017 "strip_size_kb": 0,
00:18:22.017 "state": "online",
00:18:22.017 "raid_level": "raid1",
00:18:22.017 "superblock": true,
00:18:22.017 "num_base_bdevs": 2,
00:18:22.017 "num_base_bdevs_discovered": 1,
00:18:22.017 "num_base_bdevs_operational": 1,
00:18:22.017 "base_bdevs_list": [
00:18:22.017 {
00:18:22.017 "name": null,
00:18:22.017 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:22.017 "is_configured": false,
00:18:22.017 "data_offset": 0,
00:18:22.017 "data_size": 7936
00:18:22.017 },
00:18:22.017 {
00:18:22.017 "name": "BaseBdev2",
00:18:22.017 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:22.017 "is_configured": true,
00:18:22.017 "data_offset": 256,
00:18:22.017 "data_size": 7936
00:18:22.017 }
00:18:22.017 ]
00:18:22.017 }'
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:22.017 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:22.587 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:22.587 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:22.587 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:22.588 "name": "raid_bdev1",
00:18:22.588 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:22.588 "strip_size_kb": 0,
00:18:22.588 "state": "online",
00:18:22.588 "raid_level": "raid1",
00:18:22.588 "superblock": true,
00:18:22.588 "num_base_bdevs": 2,
00:18:22.588 "num_base_bdevs_discovered": 1,
00:18:22.588 "num_base_bdevs_operational": 1,
00:18:22.588 "base_bdevs_list": [
00:18:22.588 {
00:18:22.588 "name": null,
00:18:22.588 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:22.588 "is_configured": false,
00:18:22.588 "data_offset": 0,
00:18:22.588 "data_size": 7936
00:18:22.588 },
00:18:22.588 {
00:18:22.588 "name": "BaseBdev2",
00:18:22.588 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:22.588 "is_configured": true,
00:18:22.588 "data_offset": 256,
00:18:22.588 "data_size": 7936
00:18:22.588 }
00:18:22.588 ]
00:18:22.588 }'
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:22.588 [2024-11-20 17:52:49.718502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:22.588 [2024-11-20 17:52:49.718561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:22.588 [2024-11-20 17:52:49.718591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:18:22.588 [2024-11-20 17:52:49.718612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:22.588 [2024-11-20 17:52:49.719134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:22.588 [2024-11-20 17:52:49.719158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:22.588 [2024-11-20 17:52:49.719254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:18:22.588 [2024-11-20 17:52:49.719270] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:18:22.588 [2024-11-20 17:52:49.719283] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:18:22.588 [2024-11-20 17:52:49.719294] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:18:22.588 BaseBdev1
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.588 17:52:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:23.972 "name": "raid_bdev1",
00:18:23.972 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719",
00:18:23.972 "strip_size_kb": 0,
00:18:23.972 "state": "online",
00:18:23.972 "raid_level": "raid1",
00:18:23.972 "superblock": true,
00:18:23.972 "num_base_bdevs": 2,
00:18:23.972 "num_base_bdevs_discovered": 1,
00:18:23.972 "num_base_bdevs_operational": 1,
00:18:23.972 "base_bdevs_list": [
00:18:23.972 {
00:18:23.972 "name": null,
00:18:23.972 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.972 "is_configured": false,
00:18:23.972 "data_offset": 0,
00:18:23.972 "data_size": 7936
00:18:23.972 },
00:18:23.972 {
00:18:23.972 "name": "BaseBdev2",
00:18:23.972 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b",
00:18:23.972 "is_configured": true,
00:18:23.972 "data_offset": 256,
00:18:23.972 "data_size": 7936
00:18:23.972 }
00:18:23.972 ]
00:18:23.972 }'
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:23.972 17:52:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 --
xtrace_disable 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.232 "name": "raid_bdev1", 00:18:24.232 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:24.232 "strip_size_kb": 0, 00:18:24.232 "state": "online", 00:18:24.232 "raid_level": "raid1", 00:18:24.232 "superblock": true, 00:18:24.232 "num_base_bdevs": 2, 00:18:24.232 "num_base_bdevs_discovered": 1, 00:18:24.232 "num_base_bdevs_operational": 1, 00:18:24.232 "base_bdevs_list": [ 00:18:24.232 { 00:18:24.232 "name": null, 00:18:24.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.232 "is_configured": false, 00:18:24.232 "data_offset": 0, 00:18:24.232 "data_size": 7936 00:18:24.232 }, 00:18:24.232 { 00:18:24.232 "name": "BaseBdev2", 00:18:24.232 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:24.232 "is_configured": true, 00:18:24.232 "data_offset": 256, 00:18:24.232 "data_size": 7936 00:18:24.232 } 00:18:24.232 ] 00:18:24.232 }' 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.232 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.233 [2024-11-20 17:52:51.351768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.233 [2024-11-20 17:52:51.351909] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.233 [2024-11-20 17:52:51.351928] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:24.233 request: 00:18:24.233 { 00:18:24.233 "base_bdev": "BaseBdev1", 00:18:24.233 "raid_bdev": "raid_bdev1", 00:18:24.233 "method": "bdev_raid_add_base_bdev", 00:18:24.233 "req_id": 1 00:18:24.233 } 00:18:24.233 Got JSON-RPC error response 00:18:24.233 response: 00:18:24.233 { 00:18:24.233 "code": -22, 00:18:24.233 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:24.233 } 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.233 17:52:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.618 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.618 "name": "raid_bdev1", 00:18:25.618 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:25.618 "strip_size_kb": 0, 00:18:25.618 "state": "online", 00:18:25.618 "raid_level": "raid1", 00:18:25.619 "superblock": true, 00:18:25.619 "num_base_bdevs": 2, 00:18:25.619 "num_base_bdevs_discovered": 1, 00:18:25.619 "num_base_bdevs_operational": 1, 00:18:25.619 "base_bdevs_list": [ 00:18:25.619 { 00:18:25.619 "name": null, 00:18:25.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.619 "is_configured": false, 00:18:25.619 "data_offset": 0, 00:18:25.619 "data_size": 7936 00:18:25.619 }, 00:18:25.619 { 00:18:25.619 "name": "BaseBdev2", 00:18:25.619 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:25.619 "is_configured": true, 00:18:25.619 "data_offset": 256, 00:18:25.619 "data_size": 7936 00:18:25.619 } 00:18:25.619 ] 00:18:25.619 }' 00:18:25.619 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.619 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.619 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.619 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.619 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.619 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.619 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.879 17:52:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.880 "name": "raid_bdev1", 00:18:25.880 "uuid": "248a1244-1544-4d62-9a6c-bd2645f0f719", 00:18:25.880 "strip_size_kb": 0, 00:18:25.880 "state": "online", 00:18:25.880 "raid_level": "raid1", 00:18:25.880 "superblock": true, 00:18:25.880 "num_base_bdevs": 2, 00:18:25.880 "num_base_bdevs_discovered": 1, 00:18:25.880 "num_base_bdevs_operational": 1, 00:18:25.880 "base_bdevs_list": [ 00:18:25.880 { 00:18:25.880 "name": null, 00:18:25.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.880 "is_configured": false, 00:18:25.880 "data_offset": 0, 00:18:25.880 "data_size": 7936 00:18:25.880 }, 00:18:25.880 { 00:18:25.880 "name": "BaseBdev2", 00:18:25.880 "uuid": "9e4509ee-f7d6-570a-afc0-406c24966c8b", 00:18:25.880 "is_configured": true, 00:18:25.880 "data_offset": 256, 00:18:25.880 "data_size": 7936 00:18:25.880 } 00:18:25.880 ] 00:18:25.880 }' 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.880 17:52:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86976 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86976 ']' 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86976 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86976 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86976' 00:18:25.880 killing process with pid 86976 00:18:25.880 Received shutdown signal, test time was about 60.000000 seconds 00:18:25.880 00:18:25.880 Latency(us) 00:18:25.880 [2024-11-20T17:52:53.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.880 [2024-11-20T17:52:53.056Z] =================================================================================================================== 00:18:25.880 [2024-11-20T17:52:53.056Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86976 00:18:25.880 [2024-11-20 17:52:52.962818] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.880 [2024-11-20 17:52:52.962938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.880 [2024-11-20 17:52:52.962989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:18:25.880 [2024-11-20 17:52:52.963001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:25.880 17:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86976 00:18:26.140 [2024-11-20 17:52:53.270321] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.522 17:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:27.522 00:18:27.522 real 0m19.897s 00:18:27.522 user 0m25.746s 00:18:27.522 sys 0m2.834s 00:18:27.522 17:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.522 17:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.522 ************************************ 00:18:27.522 END TEST raid_rebuild_test_sb_4k 00:18:27.522 ************************************ 00:18:27.522 17:52:54 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:27.522 17:52:54 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:27.522 17:52:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:27.522 17:52:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.522 17:52:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.522 ************************************ 00:18:27.522 START TEST raid_state_function_test_sb_md_separate 00:18:27.522 ************************************ 00:18:27.522 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:27.522 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:27.522 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:27.522 17:52:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:27.522 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:27.522 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:27.523 17:52:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:27.523 Process raid pid: 87662 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87662 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87662' 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87662 00:18:27.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87662 ']' 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.523 17:52:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.523 [2024-11-20 17:52:54.613848] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:27.523 [2024-11-20 17:52:54.613974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.783 [2024-11-20 17:52:54.795172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.783 [2024-11-20 17:52:54.927879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.041 [2024-11-20 17:52:55.160352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.041 [2024-11-20 17:52:55.160389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.302 [2024-11-20 17:52:55.429628] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.302 [2024-11-20 17:52:55.429690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:28.302 [2024-11-20 17:52:55.429700] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.302 [2024-11-20 17:52:55.429710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.302 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.562 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.562 "name": "Existed_Raid", 00:18:28.562 "uuid": "ea9b0de1-2be4-48fb-ac0e-1c5c172178ac", 00:18:28.562 "strip_size_kb": 0, 00:18:28.562 "state": "configuring", 00:18:28.562 "raid_level": "raid1", 00:18:28.562 "superblock": true, 00:18:28.562 "num_base_bdevs": 2, 00:18:28.562 "num_base_bdevs_discovered": 0, 00:18:28.562 "num_base_bdevs_operational": 2, 00:18:28.562 "base_bdevs_list": [ 00:18:28.562 { 00:18:28.562 "name": "BaseBdev1", 00:18:28.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.562 "is_configured": false, 00:18:28.562 "data_offset": 0, 00:18:28.562 "data_size": 0 00:18:28.562 }, 00:18:28.562 { 00:18:28.562 "name": "BaseBdev2", 00:18:28.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.562 "is_configured": false, 00:18:28.562 "data_offset": 0, 00:18:28.562 "data_size": 0 00:18:28.562 } 00:18:28.562 ] 00:18:28.562 }' 00:18:28.562 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.562 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.823 
[2024-11-20 17:52:55.912868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.823 [2024-11-20 17:52:55.912967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.823 [2024-11-20 17:52:55.924855] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.823 [2024-11-20 17:52:55.924949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:28.823 [2024-11-20 17:52:55.924975] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.823 [2024-11-20 17:52:55.925000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.823 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.824 [2024-11-20 17:52:55.979668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.824 
BaseBdev1 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.824 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.084 17:52:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.084 [ 00:18:29.084 { 00:18:29.084 "name": "BaseBdev1", 00:18:29.084 "aliases": [ 00:18:29.084 "1b38bbda-6653-4b7d-9a5a-f3a7376b2e97" 00:18:29.084 ], 00:18:29.084 "product_name": "Malloc disk", 
00:18:29.084 "block_size": 4096, 00:18:29.084 "num_blocks": 8192, 00:18:29.084 "uuid": "1b38bbda-6653-4b7d-9a5a-f3a7376b2e97", 00:18:29.084 "md_size": 32, 00:18:29.084 "md_interleave": false, 00:18:29.084 "dif_type": 0, 00:18:29.084 "assigned_rate_limits": { 00:18:29.084 "rw_ios_per_sec": 0, 00:18:29.084 "rw_mbytes_per_sec": 0, 00:18:29.084 "r_mbytes_per_sec": 0, 00:18:29.084 "w_mbytes_per_sec": 0 00:18:29.084 }, 00:18:29.084 "claimed": true, 00:18:29.084 "claim_type": "exclusive_write", 00:18:29.084 "zoned": false, 00:18:29.084 "supported_io_types": { 00:18:29.084 "read": true, 00:18:29.084 "write": true, 00:18:29.084 "unmap": true, 00:18:29.084 "flush": true, 00:18:29.084 "reset": true, 00:18:29.084 "nvme_admin": false, 00:18:29.084 "nvme_io": false, 00:18:29.084 "nvme_io_md": false, 00:18:29.084 "write_zeroes": true, 00:18:29.084 "zcopy": true, 00:18:29.084 "get_zone_info": false, 00:18:29.084 "zone_management": false, 00:18:29.084 "zone_append": false, 00:18:29.084 "compare": false, 00:18:29.084 "compare_and_write": false, 00:18:29.084 "abort": true, 00:18:29.084 "seek_hole": false, 00:18:29.084 "seek_data": false, 00:18:29.084 "copy": true, 00:18:29.084 "nvme_iov_md": false 00:18:29.084 }, 00:18:29.084 "memory_domains": [ 00:18:29.084 { 00:18:29.084 "dma_device_id": "system", 00:18:29.084 "dma_device_type": 1 00:18:29.084 }, 00:18:29.084 { 00:18:29.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.084 "dma_device_type": 2 00:18:29.084 } 00:18:29.084 ], 00:18:29.084 "driver_specific": {} 00:18:29.084 } 00:18:29.084 ] 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:29.084 17:52:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.084 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.085 "name": "Existed_Raid", 00:18:29.085 "uuid": "c4b1bcda-2b2f-4bd4-b236-a680902a1f2d", 
00:18:29.085 "strip_size_kb": 0, 00:18:29.085 "state": "configuring", 00:18:29.085 "raid_level": "raid1", 00:18:29.085 "superblock": true, 00:18:29.085 "num_base_bdevs": 2, 00:18:29.085 "num_base_bdevs_discovered": 1, 00:18:29.085 "num_base_bdevs_operational": 2, 00:18:29.085 "base_bdevs_list": [ 00:18:29.085 { 00:18:29.085 "name": "BaseBdev1", 00:18:29.085 "uuid": "1b38bbda-6653-4b7d-9a5a-f3a7376b2e97", 00:18:29.085 "is_configured": true, 00:18:29.085 "data_offset": 256, 00:18:29.085 "data_size": 7936 00:18:29.085 }, 00:18:29.085 { 00:18:29.085 "name": "BaseBdev2", 00:18:29.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.085 "is_configured": false, 00:18:29.085 "data_offset": 0, 00:18:29.085 "data_size": 0 00:18:29.085 } 00:18:29.085 ] 00:18:29.085 }' 00:18:29.085 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.085 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.345 [2024-11-20 17:52:56.470858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:29.345 [2024-11-20 17:52:56.470941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:29.345 17:52:56 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.345 [2024-11-20 17:52:56.482882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.345 [2024-11-20 17:52:56.484959] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:29.345 [2024-11-20 17:52:56.485049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.345 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.606 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.606 "name": "Existed_Raid", 00:18:29.606 "uuid": "fddb01d8-2728-4adf-ad78-e4e5a40ca530", 00:18:29.606 "strip_size_kb": 0, 00:18:29.606 "state": "configuring", 00:18:29.606 "raid_level": "raid1", 00:18:29.606 "superblock": true, 00:18:29.606 "num_base_bdevs": 2, 00:18:29.606 "num_base_bdevs_discovered": 1, 00:18:29.606 "num_base_bdevs_operational": 2, 00:18:29.606 "base_bdevs_list": [ 00:18:29.606 { 00:18:29.606 "name": "BaseBdev1", 00:18:29.606 "uuid": "1b38bbda-6653-4b7d-9a5a-f3a7376b2e97", 00:18:29.606 "is_configured": true, 00:18:29.606 "data_offset": 256, 00:18:29.606 "data_size": 7936 00:18:29.606 }, 00:18:29.606 { 00:18:29.606 "name": "BaseBdev2", 00:18:29.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.606 "is_configured": false, 00:18:29.606 "data_offset": 0, 00:18:29.606 "data_size": 0 00:18:29.606 } 00:18:29.606 ] 00:18:29.606 }' 00:18:29.606 17:52:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.606 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.866 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:29.866 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.866 17:52:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.866 [2024-11-20 17:52:57.034752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.866 [2024-11-20 17:52:57.035114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:29.866 [2024-11-20 17:52:57.035174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:29.866 [2024-11-20 17:52:57.035298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:29.866 [2024-11-20 17:52:57.035471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:29.866 [2024-11-20 17:52:57.035517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:29.866 [2024-11-20 17:52:57.035669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.866 BaseBdev2 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.866 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 [ 00:18:30.126 { 00:18:30.126 "name": "BaseBdev2", 00:18:30.126 "aliases": [ 00:18:30.126 "05843d45-89c3-4019-92df-6dbcc1d3a61c" 00:18:30.126 ], 00:18:30.126 "product_name": "Malloc disk", 00:18:30.126 "block_size": 4096, 00:18:30.126 "num_blocks": 8192, 00:18:30.126 "uuid": "05843d45-89c3-4019-92df-6dbcc1d3a61c", 00:18:30.126 "md_size": 32, 00:18:30.126 "md_interleave": false, 00:18:30.126 "dif_type": 0, 00:18:30.126 "assigned_rate_limits": { 00:18:30.126 "rw_ios_per_sec": 0, 00:18:30.126 "rw_mbytes_per_sec": 0, 00:18:30.126 "r_mbytes_per_sec": 0, 00:18:30.126 "w_mbytes_per_sec": 0 00:18:30.126 }, 00:18:30.126 "claimed": true, 00:18:30.126 "claim_type": 
"exclusive_write", 00:18:30.126 "zoned": false, 00:18:30.126 "supported_io_types": { 00:18:30.126 "read": true, 00:18:30.126 "write": true, 00:18:30.126 "unmap": true, 00:18:30.126 "flush": true, 00:18:30.126 "reset": true, 00:18:30.126 "nvme_admin": false, 00:18:30.126 "nvme_io": false, 00:18:30.126 "nvme_io_md": false, 00:18:30.126 "write_zeroes": true, 00:18:30.126 "zcopy": true, 00:18:30.126 "get_zone_info": false, 00:18:30.126 "zone_management": false, 00:18:30.126 "zone_append": false, 00:18:30.126 "compare": false, 00:18:30.126 "compare_and_write": false, 00:18:30.126 "abort": true, 00:18:30.126 "seek_hole": false, 00:18:30.126 "seek_data": false, 00:18:30.126 "copy": true, 00:18:30.126 "nvme_iov_md": false 00:18:30.126 }, 00:18:30.126 "memory_domains": [ 00:18:30.126 { 00:18:30.126 "dma_device_id": "system", 00:18:30.126 "dma_device_type": 1 00:18:30.126 }, 00:18:30.126 { 00:18:30.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.126 "dma_device_type": 2 00:18:30.126 } 00:18:30.126 ], 00:18:30.126 "driver_specific": {} 00:18:30.126 } 00:18:30.126 ] 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.126 
17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.126 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.126 "name": "Existed_Raid", 00:18:30.126 "uuid": "fddb01d8-2728-4adf-ad78-e4e5a40ca530", 00:18:30.126 "strip_size_kb": 0, 00:18:30.126 "state": "online", 00:18:30.126 "raid_level": "raid1", 00:18:30.126 "superblock": true, 00:18:30.126 "num_base_bdevs": 2, 00:18:30.126 "num_base_bdevs_discovered": 2, 00:18:30.126 "num_base_bdevs_operational": 2, 00:18:30.126 
"base_bdevs_list": [ 00:18:30.126 { 00:18:30.127 "name": "BaseBdev1", 00:18:30.127 "uuid": "1b38bbda-6653-4b7d-9a5a-f3a7376b2e97", 00:18:30.127 "is_configured": true, 00:18:30.127 "data_offset": 256, 00:18:30.127 "data_size": 7936 00:18:30.127 }, 00:18:30.127 { 00:18:30.127 "name": "BaseBdev2", 00:18:30.127 "uuid": "05843d45-89c3-4019-92df-6dbcc1d3a61c", 00:18:30.127 "is_configured": true, 00:18:30.127 "data_offset": 256, 00:18:30.127 "data_size": 7936 00:18:30.127 } 00:18:30.127 ] 00:18:30.127 }' 00:18:30.127 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.127 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.387 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:30.387 [2024-11-20 17:52:57.554176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:30.647 "name": "Existed_Raid", 00:18:30.647 "aliases": [ 00:18:30.647 "fddb01d8-2728-4adf-ad78-e4e5a40ca530" 00:18:30.647 ], 00:18:30.647 "product_name": "Raid Volume", 00:18:30.647 "block_size": 4096, 00:18:30.647 "num_blocks": 7936, 00:18:30.647 "uuid": "fddb01d8-2728-4adf-ad78-e4e5a40ca530", 00:18:30.647 "md_size": 32, 00:18:30.647 "md_interleave": false, 00:18:30.647 "dif_type": 0, 00:18:30.647 "assigned_rate_limits": { 00:18:30.647 "rw_ios_per_sec": 0, 00:18:30.647 "rw_mbytes_per_sec": 0, 00:18:30.647 "r_mbytes_per_sec": 0, 00:18:30.647 "w_mbytes_per_sec": 0 00:18:30.647 }, 00:18:30.647 "claimed": false, 00:18:30.647 "zoned": false, 00:18:30.647 "supported_io_types": { 00:18:30.647 "read": true, 00:18:30.647 "write": true, 00:18:30.647 "unmap": false, 00:18:30.647 "flush": false, 00:18:30.647 "reset": true, 00:18:30.647 "nvme_admin": false, 00:18:30.647 "nvme_io": false, 00:18:30.647 "nvme_io_md": false, 00:18:30.647 "write_zeroes": true, 00:18:30.647 "zcopy": false, 00:18:30.647 "get_zone_info": false, 00:18:30.647 "zone_management": false, 00:18:30.647 "zone_append": false, 00:18:30.647 "compare": false, 00:18:30.647 "compare_and_write": false, 00:18:30.647 "abort": false, 00:18:30.647 "seek_hole": false, 00:18:30.647 "seek_data": false, 00:18:30.647 "copy": false, 00:18:30.647 "nvme_iov_md": false 00:18:30.647 }, 00:18:30.647 "memory_domains": [ 00:18:30.647 { 00:18:30.647 "dma_device_id": "system", 00:18:30.647 "dma_device_type": 1 00:18:30.647 }, 00:18:30.647 { 00:18:30.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.647 "dma_device_type": 2 00:18:30.647 }, 00:18:30.647 { 
00:18:30.647 "dma_device_id": "system", 00:18:30.647 "dma_device_type": 1 00:18:30.647 }, 00:18:30.647 { 00:18:30.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.647 "dma_device_type": 2 00:18:30.647 } 00:18:30.647 ], 00:18:30.647 "driver_specific": { 00:18:30.647 "raid": { 00:18:30.647 "uuid": "fddb01d8-2728-4adf-ad78-e4e5a40ca530", 00:18:30.647 "strip_size_kb": 0, 00:18:30.647 "state": "online", 00:18:30.647 "raid_level": "raid1", 00:18:30.647 "superblock": true, 00:18:30.647 "num_base_bdevs": 2, 00:18:30.647 "num_base_bdevs_discovered": 2, 00:18:30.647 "num_base_bdevs_operational": 2, 00:18:30.647 "base_bdevs_list": [ 00:18:30.647 { 00:18:30.647 "name": "BaseBdev1", 00:18:30.647 "uuid": "1b38bbda-6653-4b7d-9a5a-f3a7376b2e97", 00:18:30.647 "is_configured": true, 00:18:30.647 "data_offset": 256, 00:18:30.647 "data_size": 7936 00:18:30.647 }, 00:18:30.647 { 00:18:30.647 "name": "BaseBdev2", 00:18:30.647 "uuid": "05843d45-89c3-4019-92df-6dbcc1d3a61c", 00:18:30.647 "is_configured": true, 00:18:30.647 "data_offset": 256, 00:18:30.647 "data_size": 7936 00:18:30.647 } 00:18:30.647 ] 00:18:30.647 } 00:18:30.647 } 00:18:30.647 }' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:30.647 BaseBdev2' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:30.647 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:30.648 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.648 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.648 [2024-11-20 17:52:57.817460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.907 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.907 "name": "Existed_Raid", 00:18:30.907 "uuid": "fddb01d8-2728-4adf-ad78-e4e5a40ca530", 00:18:30.907 "strip_size_kb": 0, 00:18:30.907 "state": "online", 00:18:30.907 "raid_level": "raid1", 00:18:30.907 "superblock": true, 00:18:30.907 "num_base_bdevs": 2, 00:18:30.907 "num_base_bdevs_discovered": 1, 00:18:30.907 "num_base_bdevs_operational": 1, 00:18:30.907 "base_bdevs_list": [ 00:18:30.907 { 00:18:30.907 "name": null, 00:18:30.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.907 "is_configured": false, 00:18:30.907 "data_offset": 0, 00:18:30.907 "data_size": 7936 00:18:30.907 }, 00:18:30.907 { 00:18:30.907 "name": "BaseBdev2", 00:18:30.907 "uuid": 
"05843d45-89c3-4019-92df-6dbcc1d3a61c", 00:18:30.907 "is_configured": true, 00:18:30.907 "data_offset": 256, 00:18:30.907 "data_size": 7936 00:18:30.908 } 00:18:30.908 ] 00:18:30.908 }' 00:18:30.908 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.908 17:52:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.503 [2024-11-20 17:52:58.435703] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.503 [2024-11-20 17:52:58.435825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.503 [2024-11-20 17:52:58.545498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.503 [2024-11-20 17:52:58.545552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.503 [2024-11-20 17:52:58.545565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:31.503 17:52:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87662 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87662 ']' 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87662 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87662 00:18:31.503 killing process with pid 87662 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87662' 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87662 00:18:31.503 [2024-11-20 17:52:58.644755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.503 17:52:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87662 00:18:31.503 [2024-11-20 17:52:58.661503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.901 17:52:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:32.901 00:18:32.901 real 0m5.318s 00:18:32.901 user 0m7.513s 00:18:32.901 sys 0m1.007s 00:18:32.901 17:52:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.901 
17:52:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.901 ************************************ 00:18:32.901 END TEST raid_state_function_test_sb_md_separate 00:18:32.901 ************************************ 00:18:32.901 17:52:59 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:32.901 17:52:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:32.901 17:52:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.901 17:52:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.901 ************************************ 00:18:32.901 START TEST raid_superblock_test_md_separate 00:18:32.901 ************************************ 00:18:32.901 17:52:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:32.901 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:32.901 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:32.901 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:32.901 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:32.901 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:32.901 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87914 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87914 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87914 ']' 00:18:32.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.902 17:52:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.902 [2024-11-20 17:52:59.996399] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:32.902 [2024-11-20 17:52:59.996576] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87914 ] 00:18:33.162 [2024-11-20 17:53:00.168340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.162 [2024-11-20 17:53:00.298626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.422 [2024-11-20 17:53:00.517912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.422 [2024-11-20 17:53:00.518097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:33.682 17:53:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.682 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.943 malloc1 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.943 [2024-11-20 17:53:00.866706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:33.943 [2024-11-20 17:53:00.866769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.943 [2024-11-20 17:53:00.866808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:33.943 [2024-11-20 17:53:00.866818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.943 [2024-11-20 17:53:00.869049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.943 [2024-11-20 17:53:00.869145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:33.943 pt1 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.943 malloc2 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.943 17:53:00 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.943 [2024-11-20 17:53:00.928755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.943 [2024-11-20 17:53:00.928883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.943 [2024-11-20 17:53:00.928924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:33.943 [2024-11-20 17:53:00.928952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.943 [2024-11-20 17:53:00.931220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.943 [2024-11-20 17:53:00.931293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.943 pt2 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.943 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.943 [2024-11-20 17:53:00.940761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:33.943 [2024-11-20 17:53:00.942887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.943 [2024-11-20 17:53:00.943138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:33.943 [2024-11-20 17:53:00.943187] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:33.943 [2024-11-20 17:53:00.943296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:33.943 [2024-11-20 17:53:00.943467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:33.943 [2024-11-20 17:53:00.943509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:33.943 [2024-11-20 17:53:00.943646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.944 17:53:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.944 "name": "raid_bdev1", 00:18:33.944 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:33.944 "strip_size_kb": 0, 00:18:33.944 "state": "online", 00:18:33.944 "raid_level": "raid1", 00:18:33.944 "superblock": true, 00:18:33.944 "num_base_bdevs": 2, 00:18:33.944 "num_base_bdevs_discovered": 2, 00:18:33.944 "num_base_bdevs_operational": 2, 00:18:33.944 "base_bdevs_list": [ 00:18:33.944 { 00:18:33.944 "name": "pt1", 00:18:33.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.944 "is_configured": true, 00:18:33.944 "data_offset": 256, 00:18:33.944 "data_size": 7936 00:18:33.944 }, 00:18:33.944 { 00:18:33.944 "name": "pt2", 00:18:33.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.944 "is_configured": true, 00:18:33.944 "data_offset": 256, 00:18:33.944 "data_size": 7936 00:18:33.944 } 00:18:33.944 ] 00:18:33.944 }' 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.944 17:53:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.514 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:34.514 17:53:01 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:34.514 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:34.514 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:34.514 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:34.514 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:34.514 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:34.515 [2024-11-20 17:53:01.388251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:34.515 "name": "raid_bdev1", 00:18:34.515 "aliases": [ 00:18:34.515 "a55e2fdf-9be1-4f18-ab00-080684357809" 00:18:34.515 ], 00:18:34.515 "product_name": "Raid Volume", 00:18:34.515 "block_size": 4096, 00:18:34.515 "num_blocks": 7936, 00:18:34.515 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:34.515 "md_size": 32, 00:18:34.515 "md_interleave": false, 00:18:34.515 "dif_type": 0, 00:18:34.515 "assigned_rate_limits": { 00:18:34.515 "rw_ios_per_sec": 0, 00:18:34.515 "rw_mbytes_per_sec": 0, 00:18:34.515 "r_mbytes_per_sec": 0, 00:18:34.515 "w_mbytes_per_sec": 0 00:18:34.515 }, 00:18:34.515 "claimed": false, 00:18:34.515 "zoned": false, 
00:18:34.515 "supported_io_types": { 00:18:34.515 "read": true, 00:18:34.515 "write": true, 00:18:34.515 "unmap": false, 00:18:34.515 "flush": false, 00:18:34.515 "reset": true, 00:18:34.515 "nvme_admin": false, 00:18:34.515 "nvme_io": false, 00:18:34.515 "nvme_io_md": false, 00:18:34.515 "write_zeroes": true, 00:18:34.515 "zcopy": false, 00:18:34.515 "get_zone_info": false, 00:18:34.515 "zone_management": false, 00:18:34.515 "zone_append": false, 00:18:34.515 "compare": false, 00:18:34.515 "compare_and_write": false, 00:18:34.515 "abort": false, 00:18:34.515 "seek_hole": false, 00:18:34.515 "seek_data": false, 00:18:34.515 "copy": false, 00:18:34.515 "nvme_iov_md": false 00:18:34.515 }, 00:18:34.515 "memory_domains": [ 00:18:34.515 { 00:18:34.515 "dma_device_id": "system", 00:18:34.515 "dma_device_type": 1 00:18:34.515 }, 00:18:34.515 { 00:18:34.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.515 "dma_device_type": 2 00:18:34.515 }, 00:18:34.515 { 00:18:34.515 "dma_device_id": "system", 00:18:34.515 "dma_device_type": 1 00:18:34.515 }, 00:18:34.515 { 00:18:34.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.515 "dma_device_type": 2 00:18:34.515 } 00:18:34.515 ], 00:18:34.515 "driver_specific": { 00:18:34.515 "raid": { 00:18:34.515 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:34.515 "strip_size_kb": 0, 00:18:34.515 "state": "online", 00:18:34.515 "raid_level": "raid1", 00:18:34.515 "superblock": true, 00:18:34.515 "num_base_bdevs": 2, 00:18:34.515 "num_base_bdevs_discovered": 2, 00:18:34.515 "num_base_bdevs_operational": 2, 00:18:34.515 "base_bdevs_list": [ 00:18:34.515 { 00:18:34.515 "name": "pt1", 00:18:34.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.515 "is_configured": true, 00:18:34.515 "data_offset": 256, 00:18:34.515 "data_size": 7936 00:18:34.515 }, 00:18:34.515 { 00:18:34.515 "name": "pt2", 00:18:34.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.515 "is_configured": true, 00:18:34.515 "data_offset": 256, 
00:18:34.515 "data_size": 7936 00:18:34.515 } 00:18:34.515 ] 00:18:34.515 } 00:18:34.515 } 00:18:34.515 }' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:34.515 pt2' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:34.515 [2024-11-20 17:53:01.623778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a55e2fdf-9be1-4f18-ab00-080684357809 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z a55e2fdf-9be1-4f18-ab00-080684357809 ']' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.515 [2024-11-20 17:53:01.675448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.515 [2024-11-20 17:53:01.675469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.515 [2024-11-20 17:53:01.675546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.515 [2024-11-20 17:53:01.675597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.515 [2024-11-20 17:53:01.675609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.515 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.776 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.776 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:34.777 17:53:01 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 [2024-11-20 17:53:01.835180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:34.777 [2024-11-20 17:53:01.837238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:34.777 [2024-11-20 17:53:01.837359] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:34.777 [2024-11-20 17:53:01.837407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:34.777 [2024-11-20 17:53:01.837421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.777 [2024-11-20 17:53:01.837431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:34.777 request: 00:18:34.777 { 00:18:34.777 "name": 
"raid_bdev1", 00:18:34.777 "raid_level": "raid1", 00:18:34.777 "base_bdevs": [ 00:18:34.777 "malloc1", 00:18:34.777 "malloc2" 00:18:34.777 ], 00:18:34.777 "superblock": false, 00:18:34.777 "method": "bdev_raid_create", 00:18:34.777 "req_id": 1 00:18:34.777 } 00:18:34.777 Got JSON-RPC error response 00:18:34.777 response: 00:18:34.777 { 00:18:34.777 "code": -17, 00:18:34.777 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:34.777 } 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 [2024-11-20 17:53:01.899077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.777 [2024-11-20 17:53:01.899164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.777 [2024-11-20 17:53:01.899193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:34.777 [2024-11-20 17:53:01.899221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.777 [2024-11-20 17:53:01.901369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.777 [2024-11-20 17:53:01.901441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.777 [2024-11-20 17:53:01.901500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:34.777 [2024-11-20 17:53:01.901584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.777 pt1 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.038 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.038 "name": "raid_bdev1", 00:18:35.038 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:35.038 "strip_size_kb": 0, 00:18:35.038 "state": "configuring", 00:18:35.038 "raid_level": "raid1", 00:18:35.038 "superblock": true, 00:18:35.038 "num_base_bdevs": 2, 00:18:35.038 "num_base_bdevs_discovered": 1, 00:18:35.038 "num_base_bdevs_operational": 2, 00:18:35.038 "base_bdevs_list": [ 00:18:35.038 { 00:18:35.038 "name": "pt1", 00:18:35.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.038 "is_configured": true, 00:18:35.038 "data_offset": 256, 00:18:35.038 "data_size": 7936 00:18:35.038 }, 00:18:35.038 { 00:18:35.038 "name": null, 00:18:35.038 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.038 "is_configured": false, 00:18:35.038 "data_offset": 256, 00:18:35.038 "data_size": 7936 00:18:35.038 } 00:18:35.038 ] 00:18:35.038 }' 00:18:35.038 17:53:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.038 17:53:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 [2024-11-20 17:53:02.386197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.299 [2024-11-20 17:53:02.386253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.299 [2024-11-20 17:53:02.386269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:35.299 [2024-11-20 17:53:02.386279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.299 [2024-11-20 17:53:02.386437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.299 [2024-11-20 17:53:02.386452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.299 [2024-11-20 17:53:02.386487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:35.299 [2024-11-20 17:53:02.386505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.299 [2024-11-20 17:53:02.386593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:35.299 [2024-11-20 17:53:02.386604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:35.299 [2024-11-20 17:53:02.386672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:35.299 [2024-11-20 17:53:02.386773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:35.299 [2024-11-20 17:53:02.386780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:35.299 [2024-11-20 17:53:02.386871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.299 pt2 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.299 "name": "raid_bdev1", 00:18:35.299 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:35.299 "strip_size_kb": 0, 00:18:35.299 "state": "online", 00:18:35.299 "raid_level": "raid1", 00:18:35.299 "superblock": true, 00:18:35.299 "num_base_bdevs": 2, 00:18:35.299 "num_base_bdevs_discovered": 2, 00:18:35.299 "num_base_bdevs_operational": 2, 00:18:35.299 "base_bdevs_list": [ 00:18:35.299 { 00:18:35.299 "name": "pt1", 00:18:35.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.299 "is_configured": true, 00:18:35.299 "data_offset": 256, 00:18:35.299 "data_size": 7936 00:18:35.299 }, 00:18:35.299 { 00:18:35.299 "name": "pt2", 00:18:35.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.299 "is_configured": true, 00:18:35.299 "data_offset": 256, 
00:18:35.299 "data_size": 7936 00:18:35.299 } 00:18:35.299 ] 00:18:35.299 }' 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.299 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.869 [2024-11-20 17:53:02.869616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.869 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.869 "name": "raid_bdev1", 00:18:35.869 "aliases": [ 00:18:35.869 "a55e2fdf-9be1-4f18-ab00-080684357809" 00:18:35.869 ], 00:18:35.870 "product_name": 
"Raid Volume", 00:18:35.870 "block_size": 4096, 00:18:35.870 "num_blocks": 7936, 00:18:35.870 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:35.870 "md_size": 32, 00:18:35.870 "md_interleave": false, 00:18:35.870 "dif_type": 0, 00:18:35.870 "assigned_rate_limits": { 00:18:35.870 "rw_ios_per_sec": 0, 00:18:35.870 "rw_mbytes_per_sec": 0, 00:18:35.870 "r_mbytes_per_sec": 0, 00:18:35.870 "w_mbytes_per_sec": 0 00:18:35.870 }, 00:18:35.870 "claimed": false, 00:18:35.870 "zoned": false, 00:18:35.870 "supported_io_types": { 00:18:35.870 "read": true, 00:18:35.870 "write": true, 00:18:35.870 "unmap": false, 00:18:35.870 "flush": false, 00:18:35.870 "reset": true, 00:18:35.870 "nvme_admin": false, 00:18:35.870 "nvme_io": false, 00:18:35.870 "nvme_io_md": false, 00:18:35.870 "write_zeroes": true, 00:18:35.870 "zcopy": false, 00:18:35.870 "get_zone_info": false, 00:18:35.870 "zone_management": false, 00:18:35.870 "zone_append": false, 00:18:35.870 "compare": false, 00:18:35.870 "compare_and_write": false, 00:18:35.870 "abort": false, 00:18:35.870 "seek_hole": false, 00:18:35.870 "seek_data": false, 00:18:35.870 "copy": false, 00:18:35.870 "nvme_iov_md": false 00:18:35.870 }, 00:18:35.870 "memory_domains": [ 00:18:35.870 { 00:18:35.870 "dma_device_id": "system", 00:18:35.870 "dma_device_type": 1 00:18:35.870 }, 00:18:35.870 { 00:18:35.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.870 "dma_device_type": 2 00:18:35.870 }, 00:18:35.870 { 00:18:35.870 "dma_device_id": "system", 00:18:35.870 "dma_device_type": 1 00:18:35.870 }, 00:18:35.870 { 00:18:35.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.870 "dma_device_type": 2 00:18:35.870 } 00:18:35.870 ], 00:18:35.870 "driver_specific": { 00:18:35.870 "raid": { 00:18:35.870 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:35.870 "strip_size_kb": 0, 00:18:35.870 "state": "online", 00:18:35.870 "raid_level": "raid1", 00:18:35.870 "superblock": true, 00:18:35.870 "num_base_bdevs": 2, 00:18:35.870 
"num_base_bdevs_discovered": 2, 00:18:35.870 "num_base_bdevs_operational": 2, 00:18:35.870 "base_bdevs_list": [ 00:18:35.870 { 00:18:35.870 "name": "pt1", 00:18:35.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.870 "is_configured": true, 00:18:35.870 "data_offset": 256, 00:18:35.870 "data_size": 7936 00:18:35.870 }, 00:18:35.870 { 00:18:35.870 "name": "pt2", 00:18:35.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.870 "is_configured": true, 00:18:35.870 "data_offset": 256, 00:18:35.870 "data_size": 7936 00:18:35.870 } 00:18:35.870 ] 00:18:35.870 } 00:18:35.870 } 00:18:35.870 }' 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:35.870 pt2' 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.870 17:53:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.870 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.870 
17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:35.870 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:35.870 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.870 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:35.870 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.870 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.870 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:36.131 [2024-11-20 17:53:03.093267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' a55e2fdf-9be1-4f18-ab00-080684357809 '!=' a55e2fdf-9be1-4f18-ab00-080684357809 ']' 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.131 [2024-11-20 17:53:03.141004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.131 17:53:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.131 "name": "raid_bdev1", 00:18:36.131 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:36.131 "strip_size_kb": 0, 00:18:36.131 "state": "online", 00:18:36.131 "raid_level": "raid1", 00:18:36.131 "superblock": true, 00:18:36.131 "num_base_bdevs": 2, 00:18:36.131 "num_base_bdevs_discovered": 1, 00:18:36.131 "num_base_bdevs_operational": 1, 00:18:36.131 "base_bdevs_list": [ 00:18:36.131 { 00:18:36.131 "name": null, 00:18:36.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.131 "is_configured": false, 00:18:36.131 "data_offset": 0, 00:18:36.131 "data_size": 7936 00:18:36.131 }, 00:18:36.131 { 00:18:36.131 "name": "pt2", 00:18:36.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.131 "is_configured": true, 00:18:36.131 "data_offset": 256, 00:18:36.131 "data_size": 7936 00:18:36.131 } 00:18:36.131 ] 00:18:36.131 }' 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:36.131 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.702 [2024-11-20 17:53:03.608404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.702 [2024-11-20 17:53:03.608465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.702 [2024-11-20 17:53:03.608536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.702 [2024-11-20 17:53:03.608588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.702 [2024-11-20 17:53:03.608648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:36.702 17:53:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:36.702 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.703 [2024-11-20 17:53:03.684294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.703 [2024-11-20 17:53:03.684390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.703 
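The trace above repeatedly runs a `verify_raid_bdev_state` helper: it fetches the raid bdev's JSON description via `rpc_cmd bdev_raid_get_bdevs`, selects the entry with `jq`, and compares state, level, and base-bdev counts against expected values. The following is a hedged, dependency-free sketch of that pattern (not SPDK's actual helper): the JSON literal is copied from the trace, and field extraction uses `grep`/`cut` instead of `jq` purely so the sketch stands alone.

```shell
# Sketch of the verify_raid_bdev_state pattern from the trace: pull a few
# fields out of the raid bdev's JSON and assert them. The JSON below mirrors
# the "num_base_bdevs_discovered": 1 snapshot seen after pt1 is deleted.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}'
# Extract "state" and "raid_level" (quoted strings) and the discovered count
# (bare number) line by line; grep -o keeps only the matched fragment.
state=$(printf '%s\n' "$raid_bdev_info" | grep -o '"state": "[a-z]*"' | cut -d'"' -f4)
level=$(printf '%s\n' "$raid_bdev_info" | grep -o '"raid_level": "[a-z0-9]*"' | cut -d'"' -f4)
discovered=$(printf '%s\n' "$raid_bdev_info" | grep -o '"num_base_bdevs_discovered": [0-9]*' | grep -o '[0-9]*$')
# Same comparison shape as the test's [[ ... == ... ]] checks.
if [ "$state" = online ] && [ "$level" = raid1 ] && [ "$discovered" -eq 1 ]; then
  echo "raid_bdev1 verified: $state $level discovered=$discovered"
fi
```

In the real harness the JSON comes from the running SPDK target and the filtering is done with `jq -r '.[] | select(.name == "raid_bdev1")'`; only the compare-against-expected shape is shown here.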
[2024-11-20 17:53:03.684419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:36.703 [2024-11-20 17:53:03.684447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.703 [2024-11-20 17:53:03.686673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.703 [2024-11-20 17:53:03.686744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.703 [2024-11-20 17:53:03.686805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:36.703 [2024-11-20 17:53:03.686884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.703 [2024-11-20 17:53:03.686990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:36.703 [2024-11-20 17:53:03.687045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:36.703 [2024-11-20 17:53:03.687137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:36.703 [2024-11-20 17:53:03.687292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:36.703 [2024-11-20 17:53:03.687328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:36.703 [2024-11-20 17:53:03.687461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.703 pt2 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.703 "name": "raid_bdev1", 00:18:36.703 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:36.703 "strip_size_kb": 0, 00:18:36.703 "state": "online", 00:18:36.703 "raid_level": "raid1", 00:18:36.703 "superblock": true, 00:18:36.703 "num_base_bdevs": 2, 00:18:36.703 "num_base_bdevs_discovered": 1, 00:18:36.703 "num_base_bdevs_operational": 1, 00:18:36.703 "base_bdevs_list": [ 00:18:36.703 { 00:18:36.703 
"name": null, 00:18:36.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.703 "is_configured": false, 00:18:36.703 "data_offset": 256, 00:18:36.703 "data_size": 7936 00:18:36.703 }, 00:18:36.703 { 00:18:36.703 "name": "pt2", 00:18:36.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.703 "is_configured": true, 00:18:36.703 "data_offset": 256, 00:18:36.703 "data_size": 7936 00:18:36.703 } 00:18:36.703 ] 00:18:36.703 }' 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.703 17:53:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.963 [2024-11-20 17:53:04.063589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.963 [2024-11-20 17:53:04.063652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.963 [2024-11-20 17:53:04.063713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.963 [2024-11-20 17:53:04.063763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.963 [2024-11-20 17:53:04.063809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.963 17:53:04 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.963 [2024-11-20 17:53:04.123526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:36.963 [2024-11-20 17:53:04.123570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.963 [2024-11-20 17:53:04.123585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:36.963 [2024-11-20 17:53:04.123593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.963 [2024-11-20 17:53:04.125646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.963 [2024-11-20 17:53:04.125681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:36.963 [2024-11-20 17:53:04.125724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:36.963 [2024-11-20 17:53:04.125772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:36.963 [2024-11-20 17:53:04.125885] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:36.963 [2024-11-20 17:53:04.125894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.963 [2024-11-20 17:53:04.125909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:36.963 [2024-11-20 17:53:04.125968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.963 [2024-11-20 17:53:04.126044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:36.963 [2024-11-20 17:53:04.126052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:36.963 [2024-11-20 17:53:04.126104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:36.963 [2024-11-20 17:53:04.126206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:36.963 [2024-11-20 17:53:04.126291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:36.963 [2024-11-20 17:53:04.126390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.963 pt1 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.963 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.224 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.224 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.224 "name": "raid_bdev1", 00:18:37.224 "uuid": "a55e2fdf-9be1-4f18-ab00-080684357809", 00:18:37.224 "strip_size_kb": 0, 00:18:37.224 "state": "online", 00:18:37.224 "raid_level": "raid1", 00:18:37.224 "superblock": true, 00:18:37.224 "num_base_bdevs": 2, 00:18:37.224 "num_base_bdevs_discovered": 1, 00:18:37.224 
"num_base_bdevs_operational": 1, 00:18:37.224 "base_bdevs_list": [ 00:18:37.224 { 00:18:37.224 "name": null, 00:18:37.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.224 "is_configured": false, 00:18:37.224 "data_offset": 256, 00:18:37.224 "data_size": 7936 00:18:37.224 }, 00:18:37.224 { 00:18:37.224 "name": "pt2", 00:18:37.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.224 "is_configured": true, 00:18:37.224 "data_offset": 256, 00:18:37.224 "data_size": 7936 00:18:37.224 } 00:18:37.224 ] 00:18:37.224 }' 00:18:37.224 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.224 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.484 [2024-11-20 
17:53:04.594944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' a55e2fdf-9be1-4f18-ab00-080684357809 '!=' a55e2fdf-9be1-4f18-ab00-080684357809 ']' 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87914 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87914 ']' 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87914 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.484 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87914 00:18:37.744 killing process with pid 87914 00:18:37.744 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.744 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.744 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87914' 00:18:37.744 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87914 00:18:37.744 [2024-11-20 17:53:04.678061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.744 [2024-11-20 17:53:04.678121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.744 [2024-11-20 17:53:04.678165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:37.744 [2024-11-20 17:53:04.678182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:37.744 17:53:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87914 00:18:37.744 [2024-11-20 17:53:04.899532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.126 17:53:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:39.126 00:18:39.126 real 0m6.164s 00:18:39.126 user 0m9.181s 00:18:39.126 sys 0m1.204s 00:18:39.126 17:53:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.126 17:53:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 ************************************ 00:18:39.126 END TEST raid_superblock_test_md_separate 00:18:39.126 ************************************ 00:18:39.126 17:53:06 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:39.126 17:53:06 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:39.126 17:53:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:39.126 17:53:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.126 17:53:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 ************************************ 00:18:39.126 START TEST raid_rebuild_test_sb_md_separate 00:18:39.126 ************************************ 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:39.126 
17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88237 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88237 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88237 ']' 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.126 17:53:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 [2024-11-20 17:53:06.266092] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:39.127 [2024-11-20 17:53:06.266299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88237 ] 00:18:39.127 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:39.127 Zero copy mechanism will not be used. 00:18:39.387 [2024-11-20 17:53:06.445513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.647 [2024-11-20 17:53:06.576869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.647 [2024-11-20 17:53:06.811508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.647 [2024-11-20 17:53:06.811657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.907 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.907 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:39.907 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.907 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:39.907 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.907 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.167 BaseBdev1_malloc 
00:18:40.167 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.167 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:40.167 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.168 [2024-11-20 17:53:07.129653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:40.168 [2024-11-20 17:53:07.129777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.168 [2024-11-20 17:53:07.129808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:40.168 [2024-11-20 17:53:07.129821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.168 [2024-11-20 17:53:07.132040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.168 [2024-11-20 17:53:07.132076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:40.168 BaseBdev1 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.168 BaseBdev2_malloc 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.168 [2024-11-20 17:53:07.192107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:40.168 [2024-11-20 17:53:07.192179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.168 [2024-11-20 17:53:07.192199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:40.168 [2024-11-20 17:53:07.192212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.168 [2024-11-20 17:53:07.194379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.168 [2024-11-20 17:53:07.194415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:40.168 BaseBdev2 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.168 spare_malloc 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.168 spare_delay 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.168 [2024-11-20 17:53:07.293129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:40.168 [2024-11-20 17:53:07.293247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.168 [2024-11-20 17:53:07.293274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:40.168 [2024-11-20 17:53:07.293286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.168 [2024-11-20 17:53:07.295462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.168 [2024-11-20 17:53:07.295503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:40.168 spare 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.168 [2024-11-20 17:53:07.305171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.168 [2024-11-20 17:53:07.307234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:40.168 [2024-11-20 17:53:07.307415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:40.168 [2024-11-20 17:53:07.307431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:40.168 [2024-11-20 17:53:07.307513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:40.168 [2024-11-20 17:53:07.307645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:40.168 [2024-11-20 17:53:07.307655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:40.168 [2024-11-20 17:53:07.307749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.168 17:53:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.168 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.428 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.428 "name": "raid_bdev1", 00:18:40.428 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:40.428 "strip_size_kb": 0, 00:18:40.428 "state": "online", 00:18:40.428 "raid_level": "raid1", 00:18:40.428 "superblock": true, 00:18:40.428 "num_base_bdevs": 2, 00:18:40.428 "num_base_bdevs_discovered": 2, 00:18:40.428 "num_base_bdevs_operational": 2, 00:18:40.428 "base_bdevs_list": [ 00:18:40.428 { 00:18:40.428 "name": "BaseBdev1", 00:18:40.428 "uuid": "90a80de5-7f7d-5cce-914f-1ae7d53dec42", 00:18:40.428 "is_configured": true, 00:18:40.428 "data_offset": 256, 00:18:40.428 "data_size": 7936 00:18:40.428 }, 00:18:40.428 { 00:18:40.428 "name": "BaseBdev2", 00:18:40.428 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:40.428 "is_configured": true, 00:18:40.428 "data_offset": 256, 00:18:40.428 "data_size": 7936 
00:18:40.428 } 00:18:40.428 ] 00:18:40.428 }' 00:18:40.428 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.428 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.687 [2024-11-20 17:53:07.780720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:40.687 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.947 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:40.947 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.947 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:40.947 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.947 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.947 17:53:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:40.947 [2024-11-20 17:53:08.032121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:40.947 /dev/nbd0 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.947 1+0 records in 00:18:40.947 1+0 records out 00:18:40.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352659 s, 11.6 MB/s 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.947 17:53:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:40.947 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:41.886 7936+0 records in 00:18:41.886 7936+0 records out 00:18:41.886 32505856 bytes (33 MB, 31 MiB) copied, 0.591911 s, 54.9 MB/s 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:41.886 [2024-11-20 17:53:08.913162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.886 17:53:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.886 [2024-11-20 17:53:08.947295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.886 17:53:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.886 17:53:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.886 "name": "raid_bdev1", 00:18:41.886 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:41.886 "strip_size_kb": 0, 00:18:41.886 "state": "online", 00:18:41.886 "raid_level": "raid1", 00:18:41.886 "superblock": true, 00:18:41.886 "num_base_bdevs": 2, 00:18:41.886 "num_base_bdevs_discovered": 1, 00:18:41.886 "num_base_bdevs_operational": 1, 00:18:41.886 "base_bdevs_list": [ 00:18:41.886 { 00:18:41.886 "name": null, 00:18:41.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.886 "is_configured": false, 00:18:41.886 "data_offset": 0, 00:18:41.886 "data_size": 7936 00:18:41.886 }, 00:18:41.886 { 00:18:41.886 "name": "BaseBdev2", 00:18:41.886 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:41.886 "is_configured": true, 00:18:41.886 "data_offset": 256, 00:18:41.886 "data_size": 7936 00:18:41.886 } 00:18:41.886 ] 00:18:41.886 }' 00:18:41.886 17:53:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.886 17:53:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.457 17:53:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:42.457 17:53:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.457 17:53:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.457 [2024-11-20 17:53:09.422462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.457 [2024-11-20 17:53:09.435241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:42.457 17:53:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.457 17:53:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:42.457 [2024-11-20 17:53:09.437311] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.397 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.397 "name": "raid_bdev1", 00:18:43.397 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:43.397 "strip_size_kb": 0, 00:18:43.397 "state": "online", 00:18:43.397 "raid_level": "raid1", 00:18:43.397 "superblock": true, 00:18:43.397 "num_base_bdevs": 2, 00:18:43.397 "num_base_bdevs_discovered": 2, 00:18:43.397 "num_base_bdevs_operational": 2, 00:18:43.397 "process": { 00:18:43.397 "type": "rebuild", 00:18:43.397 "target": "spare", 00:18:43.397 "progress": { 00:18:43.397 "blocks": 2560, 00:18:43.397 "percent": 32 00:18:43.397 } 00:18:43.397 }, 00:18:43.397 "base_bdevs_list": [ 00:18:43.397 { 00:18:43.398 "name": "spare", 00:18:43.398 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:43.398 "is_configured": true, 00:18:43.398 "data_offset": 256, 00:18:43.398 "data_size": 7936 00:18:43.398 }, 00:18:43.398 { 00:18:43.398 "name": "BaseBdev2", 00:18:43.398 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:43.398 "is_configured": true, 00:18:43.398 "data_offset": 256, 00:18:43.398 "data_size": 7936 00:18:43.398 } 00:18:43.398 ] 00:18:43.398 }' 00:18:43.398 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.398 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.398 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.658 17:53:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.658 [2024-11-20 17:53:10.597260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.658 [2024-11-20 17:53:10.646052] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:43.658 [2024-11-20 17:53:10.646106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.658 [2024-11-20 17:53:10.646120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.658 [2024-11-20 17:53:10.646133] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.658 17:53:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.658 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.658 "name": "raid_bdev1", 00:18:43.659 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:43.659 "strip_size_kb": 0, 00:18:43.659 "state": "online", 00:18:43.659 "raid_level": "raid1", 00:18:43.659 "superblock": true, 00:18:43.659 "num_base_bdevs": 2, 00:18:43.659 "num_base_bdevs_discovered": 1, 00:18:43.659 "num_base_bdevs_operational": 1, 00:18:43.659 "base_bdevs_list": [ 00:18:43.659 { 00:18:43.659 "name": null, 00:18:43.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.659 "is_configured": false, 00:18:43.659 "data_offset": 0, 00:18:43.659 "data_size": 7936 00:18:43.659 }, 00:18:43.659 { 00:18:43.659 "name": "BaseBdev2", 00:18:43.659 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:43.659 "is_configured": true, 00:18:43.659 "data_offset": 256, 00:18:43.659 "data_size": 7936 00:18:43.659 } 00:18:43.659 ] 00:18:43.659 }' 00:18:43.659 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.659 17:53:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.229 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.229 "name": "raid_bdev1", 00:18:44.229 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:44.229 "strip_size_kb": 0, 00:18:44.230 "state": "online", 00:18:44.230 "raid_level": "raid1", 00:18:44.230 "superblock": true, 00:18:44.230 "num_base_bdevs": 2, 00:18:44.230 "num_base_bdevs_discovered": 1, 00:18:44.230 "num_base_bdevs_operational": 1, 00:18:44.230 "base_bdevs_list": [ 00:18:44.230 { 00:18:44.230 "name": null, 00:18:44.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.230 
"is_configured": false, 00:18:44.230 "data_offset": 0, 00:18:44.230 "data_size": 7936 00:18:44.230 }, 00:18:44.230 { 00:18:44.230 "name": "BaseBdev2", 00:18:44.230 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:44.230 "is_configured": true, 00:18:44.230 "data_offset": 256, 00:18:44.230 "data_size": 7936 00:18:44.230 } 00:18:44.230 ] 00:18:44.230 }' 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.230 [2024-11-20 17:53:11.225308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:44.230 [2024-11-20 17:53:11.238503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.230 17:53:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:44.230 [2024-11-20 17:53:11.240607] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.171 17:53:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.171 "name": "raid_bdev1", 00:18:45.171 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:45.171 "strip_size_kb": 0, 00:18:45.171 "state": "online", 00:18:45.171 "raid_level": "raid1", 00:18:45.171 "superblock": true, 00:18:45.171 "num_base_bdevs": 2, 00:18:45.171 "num_base_bdevs_discovered": 2, 00:18:45.171 "num_base_bdevs_operational": 2, 00:18:45.171 "process": { 00:18:45.171 "type": "rebuild", 00:18:45.171 "target": "spare", 00:18:45.171 "progress": { 00:18:45.171 "blocks": 2560, 00:18:45.171 "percent": 32 00:18:45.171 } 00:18:45.171 }, 00:18:45.171 "base_bdevs_list": [ 00:18:45.171 { 00:18:45.171 "name": "spare", 00:18:45.171 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:45.171 "is_configured": true, 00:18:45.171 "data_offset": 256, 00:18:45.171 "data_size": 7936 00:18:45.171 }, 
00:18:45.171 { 00:18:45.171 "name": "BaseBdev2", 00:18:45.171 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:45.171 "is_configured": true, 00:18:45.171 "data_offset": 256, 00:18:45.171 "data_size": 7936 00:18:45.171 } 00:18:45.171 ] 00:18:45.171 }' 00:18:45.171 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.431 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.431 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.431 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.431 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:45.431 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:45.431 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:45.431 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:45.431 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:45.431 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=721 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.432 17:53:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.432 "name": "raid_bdev1", 00:18:45.432 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:45.432 "strip_size_kb": 0, 00:18:45.432 "state": "online", 00:18:45.432 "raid_level": "raid1", 00:18:45.432 "superblock": true, 00:18:45.432 "num_base_bdevs": 2, 00:18:45.432 "num_base_bdevs_discovered": 2, 00:18:45.432 "num_base_bdevs_operational": 2, 00:18:45.432 "process": { 00:18:45.432 "type": "rebuild", 00:18:45.432 "target": "spare", 00:18:45.432 "progress": { 00:18:45.432 "blocks": 2816, 00:18:45.432 "percent": 35 00:18:45.432 } 00:18:45.432 }, 00:18:45.432 "base_bdevs_list": [ 00:18:45.432 { 00:18:45.432 "name": "spare", 00:18:45.432 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:45.432 "is_configured": true, 00:18:45.432 "data_offset": 256, 00:18:45.432 "data_size": 7936 00:18:45.432 }, 00:18:45.432 { 00:18:45.432 "name": "BaseBdev2", 00:18:45.432 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:45.432 
"is_configured": true, 00:18:45.432 "data_offset": 256, 00:18:45.432 "data_size": 7936 00:18:45.432 } 00:18:45.432 ] 00:18:45.432 }' 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.432 17:53:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.373 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.633 17:53:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.633 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.633 "name": "raid_bdev1", 00:18:46.633 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:46.633 "strip_size_kb": 0, 00:18:46.633 "state": "online", 00:18:46.633 "raid_level": "raid1", 00:18:46.633 "superblock": true, 00:18:46.633 "num_base_bdevs": 2, 00:18:46.633 "num_base_bdevs_discovered": 2, 00:18:46.633 "num_base_bdevs_operational": 2, 00:18:46.633 "process": { 00:18:46.633 "type": "rebuild", 00:18:46.633 "target": "spare", 00:18:46.633 "progress": { 00:18:46.633 "blocks": 5632, 00:18:46.633 "percent": 70 00:18:46.633 } 00:18:46.633 }, 00:18:46.633 "base_bdevs_list": [ 00:18:46.633 { 00:18:46.633 "name": "spare", 00:18:46.633 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:46.633 "is_configured": true, 00:18:46.633 "data_offset": 256, 00:18:46.634 "data_size": 7936 00:18:46.634 }, 00:18:46.634 { 00:18:46.634 "name": "BaseBdev2", 00:18:46.634 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:46.634 "is_configured": true, 00:18:46.634 "data_offset": 256, 00:18:46.634 "data_size": 7936 00:18:46.634 } 00:18:46.634 ] 00:18:46.634 }' 00:18:46.634 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.634 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.634 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.634 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.634 17:53:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:47.205 [2024-11-20 17:53:14.362465] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:47.205 [2024-11-20 17:53:14.362546] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:47.205 [2024-11-20 17:53:14.362643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.776 "name": "raid_bdev1", 00:18:47.776 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:47.776 "strip_size_kb": 0, 00:18:47.776 "state": "online", 00:18:47.776 "raid_level": "raid1", 00:18:47.776 "superblock": true, 00:18:47.776 
"num_base_bdevs": 2, 00:18:47.776 "num_base_bdevs_discovered": 2, 00:18:47.776 "num_base_bdevs_operational": 2, 00:18:47.776 "base_bdevs_list": [ 00:18:47.776 { 00:18:47.776 "name": "spare", 00:18:47.776 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:47.776 "is_configured": true, 00:18:47.776 "data_offset": 256, 00:18:47.776 "data_size": 7936 00:18:47.776 }, 00:18:47.776 { 00:18:47.776 "name": "BaseBdev2", 00:18:47.776 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:47.776 "is_configured": true, 00:18:47.776 "data_offset": 256, 00:18:47.776 "data_size": 7936 00:18:47.776 } 00:18:47.776 ] 00:18:47.776 }' 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.776 17:53:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.776 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.777 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.777 "name": "raid_bdev1", 00:18:47.777 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:47.777 "strip_size_kb": 0, 00:18:47.777 "state": "online", 00:18:47.777 "raid_level": "raid1", 00:18:47.777 "superblock": true, 00:18:47.777 "num_base_bdevs": 2, 00:18:47.777 "num_base_bdevs_discovered": 2, 00:18:47.777 "num_base_bdevs_operational": 2, 00:18:47.777 "base_bdevs_list": [ 00:18:47.777 { 00:18:47.777 "name": "spare", 00:18:47.777 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:47.777 "is_configured": true, 00:18:47.777 "data_offset": 256, 00:18:47.777 "data_size": 7936 00:18:47.777 }, 00:18:47.777 { 00:18:47.777 "name": "BaseBdev2", 00:18:47.777 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:47.777 "is_configured": true, 00:18:47.777 "data_offset": 256, 00:18:47.777 "data_size": 7936 00:18:47.777 } 00:18:47.777 ] 00:18:47.777 }' 00:18:47.777 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.777 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.777 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.037 17:53:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.037 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.037 "name": "raid_bdev1", 00:18:48.037 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:48.037 
"strip_size_kb": 0, 00:18:48.037 "state": "online", 00:18:48.037 "raid_level": "raid1", 00:18:48.037 "superblock": true, 00:18:48.037 "num_base_bdevs": 2, 00:18:48.037 "num_base_bdevs_discovered": 2, 00:18:48.037 "num_base_bdevs_operational": 2, 00:18:48.037 "base_bdevs_list": [ 00:18:48.037 { 00:18:48.037 "name": "spare", 00:18:48.037 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:48.037 "is_configured": true, 00:18:48.037 "data_offset": 256, 00:18:48.037 "data_size": 7936 00:18:48.037 }, 00:18:48.037 { 00:18:48.037 "name": "BaseBdev2", 00:18:48.037 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:48.037 "is_configured": true, 00:18:48.037 "data_offset": 256, 00:18:48.037 "data_size": 7936 00:18:48.037 } 00:18:48.037 ] 00:18:48.037 }' 00:18:48.037 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.037 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.298 [2024-11-20 17:53:15.401380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.298 [2024-11-20 17:53:15.401417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.298 [2024-11-20 17:53:15.401504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.298 [2024-11-20 17:53:15.401591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.298 [2024-11-20 17:53:15.401608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.298 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:48.559 /dev/nbd0 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.559 1+0 records in 00:18:48.559 1+0 records out 00:18:48.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360724 s, 11.4 MB/s 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.559 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:48.819 /dev/nbd1 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.819 1+0 records in 00:18:48.819 1+0 records out 00:18:48.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004063 s, 10.1 MB/s 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.819 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:48.820 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.820 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.820 17:53:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:49.080 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:49.080 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:49.080 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:49.080 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:49.080 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:49.080 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:49.080 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:49.340 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.601 [2024-11-20 17:53:16.621970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.601 [2024-11-20 17:53:16.622038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.601 [2024-11-20 17:53:16.622063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:49.601 [2024-11-20 17:53:16.622072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:49.601 [2024-11-20 17:53:16.624277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.601 [2024-11-20 17:53:16.624309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.601 [2024-11-20 17:53:16.624368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:49.601 [2024-11-20 17:53:16.624425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.601 [2024-11-20 17:53:16.624572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.601 spare 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.601 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.602 [2024-11-20 17:53:16.724454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:49.602 [2024-11-20 17:53:16.724496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:49.602 [2024-11-20 17:53:16.724594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:49.602 [2024-11-20 17:53:16.724724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:49.602 [2024-11-20 17:53:16.724733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:49.602 [2024-11-20 17:53:16.724857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.602 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.878 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.878 "name": "raid_bdev1", 00:18:49.878 "uuid": 
"906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:49.878 "strip_size_kb": 0, 00:18:49.878 "state": "online", 00:18:49.878 "raid_level": "raid1", 00:18:49.878 "superblock": true, 00:18:49.878 "num_base_bdevs": 2, 00:18:49.878 "num_base_bdevs_discovered": 2, 00:18:49.878 "num_base_bdevs_operational": 2, 00:18:49.878 "base_bdevs_list": [ 00:18:49.878 { 00:18:49.878 "name": "spare", 00:18:49.878 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:49.878 "is_configured": true, 00:18:49.878 "data_offset": 256, 00:18:49.878 "data_size": 7936 00:18:49.878 }, 00:18:49.879 { 00:18:49.879 "name": "BaseBdev2", 00:18:49.879 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:49.879 "is_configured": true, 00:18:49.879 "data_offset": 256, 00:18:49.879 "data_size": 7936 00:18:49.879 } 00:18:49.879 ] 00:18:49.879 }' 00:18:49.879 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.879 17:53:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.156 "name": "raid_bdev1", 00:18:50.156 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:50.156 "strip_size_kb": 0, 00:18:50.156 "state": "online", 00:18:50.156 "raid_level": "raid1", 00:18:50.156 "superblock": true, 00:18:50.156 "num_base_bdevs": 2, 00:18:50.156 "num_base_bdevs_discovered": 2, 00:18:50.156 "num_base_bdevs_operational": 2, 00:18:50.156 "base_bdevs_list": [ 00:18:50.156 { 00:18:50.156 "name": "spare", 00:18:50.156 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:50.156 "is_configured": true, 00:18:50.156 "data_offset": 256, 00:18:50.156 "data_size": 7936 00:18:50.156 }, 00:18:50.156 { 00:18:50.156 "name": "BaseBdev2", 00:18:50.156 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:50.156 "is_configured": true, 00:18:50.156 "data_offset": 256, 00:18:50.156 "data_size": 7936 00:18:50.156 } 00:18:50.156 ] 00:18:50.156 }' 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:50.156 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.416 [2024-11-20 17:53:17.368974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.416 "name": "raid_bdev1", 00:18:50.416 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:50.416 "strip_size_kb": 0, 00:18:50.416 "state": "online", 00:18:50.416 "raid_level": "raid1", 00:18:50.416 "superblock": true, 00:18:50.416 "num_base_bdevs": 2, 00:18:50.416 "num_base_bdevs_discovered": 1, 00:18:50.416 "num_base_bdevs_operational": 1, 00:18:50.416 "base_bdevs_list": [ 00:18:50.416 { 00:18:50.416 "name": null, 00:18:50.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.416 "is_configured": false, 00:18:50.416 "data_offset": 0, 00:18:50.416 "data_size": 7936 00:18:50.416 }, 00:18:50.416 { 00:18:50.416 "name": "BaseBdev2", 00:18:50.416 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:50.416 "is_configured": true, 00:18:50.416 "data_offset": 256, 00:18:50.416 "data_size": 7936 00:18:50.416 } 00:18:50.416 ] 00:18:50.416 }' 00:18:50.416 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.416 17:53:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.677 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.677 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.677 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.677 [2024-11-20 17:53:17.820568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.677 [2024-11-20 17:53:17.820713] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.677 [2024-11-20 17:53:17.820729] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:50.677 [2024-11-20 17:53:17.820769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.677 [2024-11-20 17:53:17.834432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:50.677 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.677 17:53:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:50.677 [2024-11-20 17:53:17.836606] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.058 "name": "raid_bdev1", 00:18:52.058 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:52.058 "strip_size_kb": 0, 00:18:52.058 "state": "online", 00:18:52.058 "raid_level": "raid1", 00:18:52.058 "superblock": true, 00:18:52.058 "num_base_bdevs": 2, 00:18:52.058 "num_base_bdevs_discovered": 2, 00:18:52.058 "num_base_bdevs_operational": 2, 00:18:52.058 "process": { 00:18:52.058 "type": "rebuild", 00:18:52.058 "target": "spare", 00:18:52.058 "progress": { 00:18:52.058 "blocks": 2560, 00:18:52.058 "percent": 32 00:18:52.058 } 00:18:52.058 }, 00:18:52.058 "base_bdevs_list": [ 00:18:52.058 { 00:18:52.058 "name": "spare", 00:18:52.058 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:52.058 "is_configured": true, 00:18:52.058 "data_offset": 256, 00:18:52.058 "data_size": 7936 00:18:52.058 }, 00:18:52.058 { 00:18:52.058 "name": "BaseBdev2", 00:18:52.058 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:52.058 "is_configured": true, 00:18:52.058 "data_offset": 256, 00:18:52.058 "data_size": 7936 00:18:52.058 } 00:18:52.058 ] 00:18:52.058 }' 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.058 17:53:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.058 [2024-11-20 17:53:18.996488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.058 [2024-11-20 17:53:19.045196] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:52.058 [2024-11-20 17:53:19.045297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.058 [2024-11-20 17:53:19.045314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.058 [2024-11-20 17:53:19.045336] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.058 "name": "raid_bdev1", 00:18:52.058 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:52.058 "strip_size_kb": 0, 00:18:52.058 "state": "online", 00:18:52.058 "raid_level": "raid1", 00:18:52.058 "superblock": true, 00:18:52.058 "num_base_bdevs": 2, 00:18:52.058 "num_base_bdevs_discovered": 1, 00:18:52.058 "num_base_bdevs_operational": 1, 00:18:52.058 "base_bdevs_list": [ 00:18:52.058 { 00:18:52.058 "name": null, 00:18:52.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.058 
"is_configured": false, 00:18:52.058 "data_offset": 0, 00:18:52.058 "data_size": 7936 00:18:52.058 }, 00:18:52.058 { 00:18:52.058 "name": "BaseBdev2", 00:18:52.058 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:52.058 "is_configured": true, 00:18:52.058 "data_offset": 256, 00:18:52.058 "data_size": 7936 00:18:52.058 } 00:18:52.058 ] 00:18:52.058 }' 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.058 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.628 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:52.628 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.628 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.628 [2024-11-20 17:53:19.513223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:52.628 [2024-11-20 17:53:19.513327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.628 [2024-11-20 17:53:19.513371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:52.628 [2024-11-20 17:53:19.513402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.628 [2024-11-20 17:53:19.513706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.628 [2024-11-20 17:53:19.513760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:52.628 [2024-11-20 17:53:19.513841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:52.628 [2024-11-20 17:53:19.513880] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:52.628 [2024-11-20 17:53:19.513921] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:52.628 [2024-11-20 17:53:19.513987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.628 [2024-11-20 17:53:19.527487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:52.628 spare 00:18:52.628 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.628 17:53:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:52.628 [2024-11-20 17:53:19.529644] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.569 "name": "raid_bdev1", 00:18:53.569 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:53.569 "strip_size_kb": 0, 00:18:53.569 "state": "online", 00:18:53.569 "raid_level": "raid1", 00:18:53.569 "superblock": true, 00:18:53.569 "num_base_bdevs": 2, 00:18:53.569 "num_base_bdevs_discovered": 2, 00:18:53.569 "num_base_bdevs_operational": 2, 00:18:53.569 "process": { 00:18:53.569 "type": "rebuild", 00:18:53.569 "target": "spare", 00:18:53.569 "progress": { 00:18:53.569 "blocks": 2560, 00:18:53.569 "percent": 32 00:18:53.569 } 00:18:53.569 }, 00:18:53.569 "base_bdevs_list": [ 00:18:53.569 { 00:18:53.569 "name": "spare", 00:18:53.569 "uuid": "735937a1-1c5f-57b7-a160-646a780444d0", 00:18:53.569 "is_configured": true, 00:18:53.569 "data_offset": 256, 00:18:53.569 "data_size": 7936 00:18:53.569 }, 00:18:53.569 { 00:18:53.569 "name": "BaseBdev2", 00:18:53.569 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:53.569 "is_configured": true, 00:18:53.569 "data_offset": 256, 00:18:53.569 "data_size": 7936 00:18:53.569 } 00:18:53.569 ] 00:18:53.569 }' 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:53.569 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.569 17:53:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.569 [2024-11-20 17:53:20.698079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.569 [2024-11-20 17:53:20.737919] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:53.569 [2024-11-20 17:53:20.737989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.569 [2024-11-20 17:53:20.738007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.569 [2024-11-20 17:53:20.738015] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.830 17:53:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.830 "name": "raid_bdev1", 00:18:53.830 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:53.830 "strip_size_kb": 0, 00:18:53.830 "state": "online", 00:18:53.830 "raid_level": "raid1", 00:18:53.830 "superblock": true, 00:18:53.830 "num_base_bdevs": 2, 00:18:53.830 "num_base_bdevs_discovered": 1, 00:18:53.830 "num_base_bdevs_operational": 1, 00:18:53.830 "base_bdevs_list": [ 00:18:53.830 { 00:18:53.830 "name": null, 00:18:53.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.830 "is_configured": false, 00:18:53.830 "data_offset": 0, 00:18:53.830 "data_size": 7936 00:18:53.830 }, 00:18:53.830 { 00:18:53.830 "name": "BaseBdev2", 00:18:53.830 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:53.830 "is_configured": true, 00:18:53.830 "data_offset": 256, 00:18:53.830 "data_size": 7936 00:18:53.830 } 00:18:53.830 ] 00:18:53.830 }' 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.830 17:53:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.090 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.350 "name": "raid_bdev1", 00:18:54.350 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:54.350 "strip_size_kb": 0, 00:18:54.350 "state": "online", 00:18:54.350 "raid_level": "raid1", 00:18:54.350 "superblock": true, 00:18:54.350 "num_base_bdevs": 2, 00:18:54.350 "num_base_bdevs_discovered": 1, 00:18:54.350 "num_base_bdevs_operational": 1, 00:18:54.350 "base_bdevs_list": [ 00:18:54.350 { 00:18:54.350 "name": null, 00:18:54.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.350 "is_configured": false, 00:18:54.350 "data_offset": 0, 00:18:54.350 "data_size": 7936 00:18:54.350 }, 00:18:54.350 { 00:18:54.350 "name": "BaseBdev2", 00:18:54.350 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:54.350 "is_configured": true, 
00:18:54.350 "data_offset": 256, 00:18:54.350 "data_size": 7936 00:18:54.350 } 00:18:54.350 ] 00:18:54.350 }' 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.350 [2024-11-20 17:53:21.377442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:54.350 [2024-11-20 17:53:21.377554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.350 [2024-11-20 17:53:21.377583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:54.350 [2024-11-20 17:53:21.377593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.350 [2024-11-20 17:53:21.377836] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.350 [2024-11-20 17:53:21.377848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:54.350 [2024-11-20 17:53:21.377901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:54.350 [2024-11-20 17:53:21.377915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:54.350 [2024-11-20 17:53:21.377925] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:54.350 [2024-11-20 17:53:21.377936] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:54.350 BaseBdev1 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.350 17:53:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.289 "name": "raid_bdev1", 00:18:55.289 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:55.289 "strip_size_kb": 0, 00:18:55.289 "state": "online", 00:18:55.289 "raid_level": "raid1", 00:18:55.289 "superblock": true, 00:18:55.289 "num_base_bdevs": 2, 00:18:55.289 "num_base_bdevs_discovered": 1, 00:18:55.289 "num_base_bdevs_operational": 1, 00:18:55.289 "base_bdevs_list": [ 00:18:55.289 { 00:18:55.289 "name": null, 00:18:55.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.289 "is_configured": false, 00:18:55.289 "data_offset": 0, 00:18:55.289 "data_size": 7936 00:18:55.289 }, 00:18:55.289 { 00:18:55.289 "name": "BaseBdev2", 00:18:55.289 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:55.289 "is_configured": true, 00:18:55.289 "data_offset": 256, 00:18:55.289 "data_size": 7936 00:18:55.289 } 00:18:55.289 ] 00:18:55.289 }' 00:18:55.289 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.289 17:53:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.858 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.858 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.858 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.858 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.859 "name": "raid_bdev1", 00:18:55.859 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:55.859 "strip_size_kb": 0, 00:18:55.859 "state": "online", 00:18:55.859 "raid_level": "raid1", 00:18:55.859 "superblock": true, 00:18:55.859 "num_base_bdevs": 2, 00:18:55.859 "num_base_bdevs_discovered": 1, 00:18:55.859 "num_base_bdevs_operational": 1, 00:18:55.859 "base_bdevs_list": [ 00:18:55.859 { 00:18:55.859 "name": null, 00:18:55.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.859 "is_configured": false, 00:18:55.859 "data_offset": 0, 00:18:55.859 
"data_size": 7936 00:18:55.859 }, 00:18:55.859 { 00:18:55.859 "name": "BaseBdev2", 00:18:55.859 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:55.859 "is_configured": true, 00:18:55.859 "data_offset": 256, 00:18:55.859 "data_size": 7936 00:18:55.859 } 00:18:55.859 ] 00:18:55.859 }' 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.859 17:53:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.859 [2024-11-20 17:53:23.014637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.859 [2024-11-20 17:53:23.014774] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.859 [2024-11-20 17:53:23.014789] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:55.859 request: 00:18:55.859 { 00:18:55.859 "base_bdev": "BaseBdev1", 00:18:55.859 "raid_bdev": "raid_bdev1", 00:18:55.859 "method": "bdev_raid_add_base_bdev", 00:18:55.859 "req_id": 1 00:18:55.859 } 00:18:55.859 Got JSON-RPC error response 00:18:55.859 response: 00:18:55.859 { 00:18:55.859 "code": -22, 00:18:55.859 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:55.859 } 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.859 17:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.241 "name": "raid_bdev1", 00:18:57.241 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:57.241 "strip_size_kb": 0, 00:18:57.241 "state": "online", 00:18:57.241 "raid_level": "raid1", 00:18:57.241 "superblock": true, 00:18:57.241 "num_base_bdevs": 2, 00:18:57.241 "num_base_bdevs_discovered": 1, 00:18:57.241 "num_base_bdevs_operational": 1, 00:18:57.241 "base_bdevs_list": [ 
00:18:57.241 { 00:18:57.241 "name": null, 00:18:57.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.241 "is_configured": false, 00:18:57.241 "data_offset": 0, 00:18:57.241 "data_size": 7936 00:18:57.241 }, 00:18:57.241 { 00:18:57.241 "name": "BaseBdev2", 00:18:57.241 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:57.241 "is_configured": true, 00:18:57.241 "data_offset": 256, 00:18:57.241 "data_size": 7936 00:18:57.241 } 00:18:57.241 ] 00:18:57.241 }' 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.241 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.501 "name": "raid_bdev1", 00:18:57.501 "uuid": "906a461c-d3bc-49ae-9c06-5cc0197bf1f6", 00:18:57.501 "strip_size_kb": 0, 00:18:57.501 "state": "online", 00:18:57.501 "raid_level": "raid1", 00:18:57.501 "superblock": true, 00:18:57.501 "num_base_bdevs": 2, 00:18:57.501 "num_base_bdevs_discovered": 1, 00:18:57.501 "num_base_bdevs_operational": 1, 00:18:57.501 "base_bdevs_list": [ 00:18:57.501 { 00:18:57.501 "name": null, 00:18:57.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.501 "is_configured": false, 00:18:57.501 "data_offset": 0, 00:18:57.501 "data_size": 7936 00:18:57.501 }, 00:18:57.501 { 00:18:57.501 "name": "BaseBdev2", 00:18:57.501 "uuid": "f74c3b09-4388-5f89-97d3-67419dd73c2f", 00:18:57.501 "is_configured": true, 00:18:57.501 "data_offset": 256, 00:18:57.501 "data_size": 7936 00:18:57.501 } 00:18:57.501 ] 00:18:57.501 }' 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88237 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88237 ']' 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88237 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.501 
17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88237 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.501 killing process with pid 88237 00:18:57.501 Received shutdown signal, test time was about 60.000000 seconds 00:18:57.501 00:18:57.501 Latency(us) 00:18:57.501 [2024-11-20T17:53:24.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.501 [2024-11-20T17:53:24.677Z] =================================================================================================================== 00:18:57.501 [2024-11-20T17:53:24.677Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88237' 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88237 00:18:57.501 [2024-11-20 17:53:24.654146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.501 [2024-11-20 17:53:24.654261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.501 [2024-11-20 17:53:24.654307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.501 17:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88237 00:18:57.501 [2024-11-20 17:53:24.654319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:58.071 [2024-11-20 17:53:24.986976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.010 ************************************ 00:18:59.010 END TEST 
raid_rebuild_test_sb_md_separate 00:18:59.010 ************************************ 00:18:59.010 17:53:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:59.010 00:18:59.010 real 0m19.986s 00:18:59.010 user 0m26.016s 00:18:59.010 sys 0m2.785s 00:18:59.010 17:53:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.010 17:53:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.269 17:53:26 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:59.269 17:53:26 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:59.269 17:53:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:59.269 17:53:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.269 17:53:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.269 ************************************ 00:18:59.269 START TEST raid_state_function_test_sb_md_interleaved 00:18:59.269 ************************************ 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:59.270 17:53:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = 
true ']' 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88934 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88934' 00:18:59.270 Process raid pid: 88934 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88934 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88934 ']' 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.270 17:53:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.270 [2024-11-20 17:53:26.323819] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:18:59.270 [2024-11-20 17:53:26.324020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.528 [2024-11-20 17:53:26.502881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.528 [2024-11-20 17:53:26.637041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.788 [2024-11-20 17:53:26.873992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.788 [2024-11-20 17:53:26.874137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.047 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.047 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:00.047 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:00.047 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.047 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.047 [2024-11-20 17:53:27.139351] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.047 [2024-11-20 17:53:27.139462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.047 [2024-11-20 17:53:27.139492] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.047 [2024-11-20 17:53:27.139515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.047 17:53:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.047 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:00.047 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.048 17:53:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.048 "name": "Existed_Raid", 00:19:00.048 "uuid": "17c72a31-1f96-4132-9cb0-5bf0a07af8eb", 00:19:00.048 "strip_size_kb": 0, 00:19:00.048 "state": "configuring", 00:19:00.048 "raid_level": "raid1", 00:19:00.048 "superblock": true, 00:19:00.048 "num_base_bdevs": 2, 00:19:00.048 "num_base_bdevs_discovered": 0, 00:19:00.048 "num_base_bdevs_operational": 2, 00:19:00.048 "base_bdevs_list": [ 00:19:00.048 { 00:19:00.048 "name": "BaseBdev1", 00:19:00.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.048 "is_configured": false, 00:19:00.048 "data_offset": 0, 00:19:00.048 "data_size": 0 00:19:00.048 }, 00:19:00.048 { 00:19:00.048 "name": "BaseBdev2", 00:19:00.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.048 "is_configured": false, 00:19:00.048 "data_offset": 0, 00:19:00.048 "data_size": 0 00:19:00.048 } 00:19:00.048 ] 00:19:00.048 }' 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.048 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 [2024-11-20 17:53:27.586523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.617 [2024-11-20 17:53:27.586557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 [2024-11-20 17:53:27.598508] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.617 [2024-11-20 17:53:27.598548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.617 [2024-11-20 17:53:27.598557] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.617 [2024-11-20 17:53:27.598569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 [2024-11-20 17:53:27.647121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.617 BaseBdev1 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 [ 00:19:00.617 { 00:19:00.617 "name": "BaseBdev1", 00:19:00.617 "aliases": [ 00:19:00.617 "179432b7-f51d-4864-b255-e1c1074dc9c5" 00:19:00.617 ], 00:19:00.617 "product_name": "Malloc disk", 00:19:00.617 "block_size": 4128, 00:19:00.617 "num_blocks": 8192, 00:19:00.617 "uuid": "179432b7-f51d-4864-b255-e1c1074dc9c5", 00:19:00.617 "md_size": 32, 00:19:00.617 
"md_interleave": true, 00:19:00.617 "dif_type": 0, 00:19:00.617 "assigned_rate_limits": { 00:19:00.617 "rw_ios_per_sec": 0, 00:19:00.617 "rw_mbytes_per_sec": 0, 00:19:00.617 "r_mbytes_per_sec": 0, 00:19:00.617 "w_mbytes_per_sec": 0 00:19:00.617 }, 00:19:00.617 "claimed": true, 00:19:00.617 "claim_type": "exclusive_write", 00:19:00.617 "zoned": false, 00:19:00.617 "supported_io_types": { 00:19:00.617 "read": true, 00:19:00.617 "write": true, 00:19:00.617 "unmap": true, 00:19:00.617 "flush": true, 00:19:00.617 "reset": true, 00:19:00.617 "nvme_admin": false, 00:19:00.617 "nvme_io": false, 00:19:00.617 "nvme_io_md": false, 00:19:00.617 "write_zeroes": true, 00:19:00.617 "zcopy": true, 00:19:00.617 "get_zone_info": false, 00:19:00.617 "zone_management": false, 00:19:00.617 "zone_append": false, 00:19:00.617 "compare": false, 00:19:00.617 "compare_and_write": false, 00:19:00.617 "abort": true, 00:19:00.617 "seek_hole": false, 00:19:00.617 "seek_data": false, 00:19:00.617 "copy": true, 00:19:00.617 "nvme_iov_md": false 00:19:00.617 }, 00:19:00.617 "memory_domains": [ 00:19:00.617 { 00:19:00.617 "dma_device_id": "system", 00:19:00.617 "dma_device_type": 1 00:19:00.617 }, 00:19:00.617 { 00:19:00.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.617 "dma_device_type": 2 00:19:00.617 } 00:19:00.617 ], 00:19:00.617 "driver_specific": {} 00:19:00.617 } 00:19:00.617 ] 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.617 17:53:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.617 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.618 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.618 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.618 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.618 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.618 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.618 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.618 "name": "Existed_Raid", 00:19:00.618 "uuid": "ab56ca84-161f-4a7c-ad09-a1e43856853a", 00:19:00.618 "strip_size_kb": 0, 00:19:00.618 "state": "configuring", 00:19:00.618 "raid_level": "raid1", 
00:19:00.618 "superblock": true, 00:19:00.618 "num_base_bdevs": 2, 00:19:00.618 "num_base_bdevs_discovered": 1, 00:19:00.618 "num_base_bdevs_operational": 2, 00:19:00.618 "base_bdevs_list": [ 00:19:00.618 { 00:19:00.618 "name": "BaseBdev1", 00:19:00.618 "uuid": "179432b7-f51d-4864-b255-e1c1074dc9c5", 00:19:00.618 "is_configured": true, 00:19:00.618 "data_offset": 256, 00:19:00.618 "data_size": 7936 00:19:00.618 }, 00:19:00.618 { 00:19:00.618 "name": "BaseBdev2", 00:19:00.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.618 "is_configured": false, 00:19:00.618 "data_offset": 0, 00:19:00.618 "data_size": 0 00:19:00.618 } 00:19:00.618 ] 00:19:00.618 }' 00:19:00.618 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.618 17:53:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.187 [2024-11-20 17:53:28.162295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.187 [2024-11-20 17:53:28.162335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.187 [2024-11-20 17:53:28.174333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.187 [2024-11-20 17:53:28.176341] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.187 [2024-11-20 17:53:28.176383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.187 
17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.187 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.187 "name": "Existed_Raid", 00:19:01.187 "uuid": "b2d2d0dc-4954-4edd-b187-7dd2f22764c9", 00:19:01.187 "strip_size_kb": 0, 00:19:01.187 "state": "configuring", 00:19:01.187 "raid_level": "raid1", 00:19:01.187 "superblock": true, 00:19:01.187 "num_base_bdevs": 2, 00:19:01.187 "num_base_bdevs_discovered": 1, 00:19:01.187 "num_base_bdevs_operational": 2, 00:19:01.187 "base_bdevs_list": [ 00:19:01.187 { 00:19:01.187 "name": "BaseBdev1", 00:19:01.187 "uuid": "179432b7-f51d-4864-b255-e1c1074dc9c5", 00:19:01.187 "is_configured": true, 00:19:01.187 "data_offset": 256, 00:19:01.187 "data_size": 7936 00:19:01.187 }, 00:19:01.187 { 00:19:01.187 "name": "BaseBdev2", 00:19:01.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.187 "is_configured": false, 00:19:01.187 "data_offset": 0, 00:19:01.187 "data_size": 0 00:19:01.188 } 00:19:01.188 ] 00:19:01.188 }' 00:19:01.188 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:01.188 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.759 [2024-11-20 17:53:28.688945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.759 [2024-11-20 17:53:28.689291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:01.759 [2024-11-20 17:53:28.689347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:01.759 [2024-11-20 17:53:28.689471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:01.759 [2024-11-20 17:53:28.689589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:01.759 [2024-11-20 17:53:28.689628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:01.759 [2024-11-20 17:53:28.689746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.759 BaseBdev2 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.759 [ 00:19:01.759 { 00:19:01.759 "name": "BaseBdev2", 00:19:01.759 "aliases": [ 00:19:01.759 "f1861890-2c84-4584-88f6-1fdf363ea9ff" 00:19:01.759 ], 00:19:01.759 "product_name": "Malloc disk", 00:19:01.759 "block_size": 4128, 00:19:01.759 "num_blocks": 8192, 00:19:01.759 "uuid": "f1861890-2c84-4584-88f6-1fdf363ea9ff", 00:19:01.759 "md_size": 32, 00:19:01.759 "md_interleave": true, 00:19:01.759 "dif_type": 0, 00:19:01.759 "assigned_rate_limits": { 00:19:01.759 "rw_ios_per_sec": 0, 00:19:01.759 "rw_mbytes_per_sec": 0, 00:19:01.759 "r_mbytes_per_sec": 0, 00:19:01.759 "w_mbytes_per_sec": 0 00:19:01.759 }, 00:19:01.759 "claimed": true, 00:19:01.759 "claim_type": "exclusive_write", 
00:19:01.759 "zoned": false, 00:19:01.759 "supported_io_types": { 00:19:01.759 "read": true, 00:19:01.759 "write": true, 00:19:01.759 "unmap": true, 00:19:01.759 "flush": true, 00:19:01.759 "reset": true, 00:19:01.759 "nvme_admin": false, 00:19:01.759 "nvme_io": false, 00:19:01.759 "nvme_io_md": false, 00:19:01.759 "write_zeroes": true, 00:19:01.759 "zcopy": true, 00:19:01.759 "get_zone_info": false, 00:19:01.759 "zone_management": false, 00:19:01.759 "zone_append": false, 00:19:01.759 "compare": false, 00:19:01.759 "compare_and_write": false, 00:19:01.759 "abort": true, 00:19:01.759 "seek_hole": false, 00:19:01.759 "seek_data": false, 00:19:01.759 "copy": true, 00:19:01.759 "nvme_iov_md": false 00:19:01.759 }, 00:19:01.759 "memory_domains": [ 00:19:01.759 { 00:19:01.759 "dma_device_id": "system", 00:19:01.759 "dma_device_type": 1 00:19:01.759 }, 00:19:01.759 { 00:19:01.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.759 "dma_device_type": 2 00:19:01.759 } 00:19:01.759 ], 00:19:01.759 "driver_specific": {} 00:19:01.759 } 00:19:01.759 ] 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:01.759 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.760 
17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.760 "name": "Existed_Raid", 00:19:01.760 "uuid": "b2d2d0dc-4954-4edd-b187-7dd2f22764c9", 00:19:01.760 "strip_size_kb": 0, 00:19:01.760 "state": "online", 00:19:01.760 "raid_level": "raid1", 00:19:01.760 "superblock": true, 00:19:01.760 "num_base_bdevs": 2, 00:19:01.760 "num_base_bdevs_discovered": 2, 00:19:01.760 
"num_base_bdevs_operational": 2, 00:19:01.760 "base_bdevs_list": [ 00:19:01.760 { 00:19:01.760 "name": "BaseBdev1", 00:19:01.760 "uuid": "179432b7-f51d-4864-b255-e1c1074dc9c5", 00:19:01.760 "is_configured": true, 00:19:01.760 "data_offset": 256, 00:19:01.760 "data_size": 7936 00:19:01.760 }, 00:19:01.760 { 00:19:01.760 "name": "BaseBdev2", 00:19:01.760 "uuid": "f1861890-2c84-4584-88f6-1fdf363ea9ff", 00:19:01.760 "is_configured": true, 00:19:01.760 "data_offset": 256, 00:19:01.760 "data_size": 7936 00:19:01.760 } 00:19:01.760 ] 00:19:01.760 }' 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.760 17:53:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:02.019 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.019 17:53:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.019 [2024-11-20 17:53:29.184407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.280 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.280 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.280 "name": "Existed_Raid", 00:19:02.280 "aliases": [ 00:19:02.280 "b2d2d0dc-4954-4edd-b187-7dd2f22764c9" 00:19:02.280 ], 00:19:02.280 "product_name": "Raid Volume", 00:19:02.280 "block_size": 4128, 00:19:02.280 "num_blocks": 7936, 00:19:02.280 "uuid": "b2d2d0dc-4954-4edd-b187-7dd2f22764c9", 00:19:02.280 "md_size": 32, 00:19:02.280 "md_interleave": true, 00:19:02.280 "dif_type": 0, 00:19:02.280 "assigned_rate_limits": { 00:19:02.280 "rw_ios_per_sec": 0, 00:19:02.280 "rw_mbytes_per_sec": 0, 00:19:02.280 "r_mbytes_per_sec": 0, 00:19:02.280 "w_mbytes_per_sec": 0 00:19:02.280 }, 00:19:02.280 "claimed": false, 00:19:02.280 "zoned": false, 00:19:02.280 "supported_io_types": { 00:19:02.280 "read": true, 00:19:02.280 "write": true, 00:19:02.280 "unmap": false, 00:19:02.280 "flush": false, 00:19:02.280 "reset": true, 00:19:02.280 "nvme_admin": false, 00:19:02.280 "nvme_io": false, 00:19:02.280 "nvme_io_md": false, 00:19:02.280 "write_zeroes": true, 00:19:02.280 "zcopy": false, 00:19:02.280 "get_zone_info": false, 00:19:02.280 "zone_management": false, 00:19:02.280 "zone_append": false, 00:19:02.280 "compare": false, 00:19:02.280 "compare_and_write": false, 00:19:02.280 "abort": false, 00:19:02.280 "seek_hole": false, 00:19:02.280 "seek_data": false, 00:19:02.280 "copy": false, 00:19:02.280 "nvme_iov_md": false 00:19:02.280 }, 00:19:02.280 "memory_domains": [ 00:19:02.280 { 00:19:02.280 "dma_device_id": "system", 00:19:02.280 "dma_device_type": 1 00:19:02.281 }, 00:19:02.281 { 00:19:02.281 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:02.281 "dma_device_type": 2 00:19:02.281 }, 00:19:02.281 { 00:19:02.281 "dma_device_id": "system", 00:19:02.281 "dma_device_type": 1 00:19:02.281 }, 00:19:02.281 { 00:19:02.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.281 "dma_device_type": 2 00:19:02.281 } 00:19:02.281 ], 00:19:02.281 "driver_specific": { 00:19:02.281 "raid": { 00:19:02.281 "uuid": "b2d2d0dc-4954-4edd-b187-7dd2f22764c9", 00:19:02.281 "strip_size_kb": 0, 00:19:02.281 "state": "online", 00:19:02.281 "raid_level": "raid1", 00:19:02.281 "superblock": true, 00:19:02.281 "num_base_bdevs": 2, 00:19:02.281 "num_base_bdevs_discovered": 2, 00:19:02.281 "num_base_bdevs_operational": 2, 00:19:02.281 "base_bdevs_list": [ 00:19:02.281 { 00:19:02.281 "name": "BaseBdev1", 00:19:02.281 "uuid": "179432b7-f51d-4864-b255-e1c1074dc9c5", 00:19:02.281 "is_configured": true, 00:19:02.281 "data_offset": 256, 00:19:02.281 "data_size": 7936 00:19:02.281 }, 00:19:02.281 { 00:19:02.281 "name": "BaseBdev2", 00:19:02.281 "uuid": "f1861890-2c84-4584-88f6-1fdf363ea9ff", 00:19:02.281 "is_configured": true, 00:19:02.281 "data_offset": 256, 00:19:02.281 "data_size": 7936 00:19:02.281 } 00:19:02.281 ] 00:19:02.281 } 00:19:02.281 } 00:19:02.281 }' 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:02.281 BaseBdev2' 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:02.281 
17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.281 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.281 [2024-11-20 17:53:29.427867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.541 17:53:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.541 "name": "Existed_Raid", 00:19:02.541 "uuid": "b2d2d0dc-4954-4edd-b187-7dd2f22764c9", 00:19:02.541 "strip_size_kb": 0, 00:19:02.541 "state": "online", 00:19:02.541 "raid_level": "raid1", 00:19:02.541 "superblock": true, 00:19:02.541 "num_base_bdevs": 2, 00:19:02.541 "num_base_bdevs_discovered": 1, 00:19:02.541 "num_base_bdevs_operational": 1, 00:19:02.541 "base_bdevs_list": [ 00:19:02.541 { 00:19:02.541 "name": null, 00:19:02.541 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:02.541 "is_configured": false, 00:19:02.541 "data_offset": 0, 00:19:02.541 "data_size": 7936 00:19:02.541 }, 00:19:02.541 { 00:19:02.541 "name": "BaseBdev2", 00:19:02.541 "uuid": "f1861890-2c84-4584-88f6-1fdf363ea9ff", 00:19:02.541 "is_configured": true, 00:19:02.541 "data_offset": 256, 00:19:02.541 "data_size": 7936 00:19:02.541 } 00:19:02.541 ] 00:19:02.541 }' 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.541 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.801 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:02.801 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:02.801 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.801 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.801 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.061 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:03.061 17:53:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:03.061 17:53:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.061 [2024-11-20 17:53:30.024094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:03.061 [2024-11-20 17:53:30.024273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.061 [2024-11-20 17:53:30.125609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.061 [2024-11-20 17:53:30.125667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.061 [2024-11-20 17:53:30.125680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88934 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88934 ']' 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88934 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88934 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.061 killing process with pid 88934 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88934' 00:19:03.061 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88934 00:19:03.061 [2024-11-20 17:53:30.224900] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.062 17:53:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88934 00:19:03.321 [2024-11-20 17:53:30.241992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.261 
17:53:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:04.261 00:19:04.261 real 0m5.184s 00:19:04.261 user 0m7.302s 00:19:04.261 sys 0m1.027s 00:19:04.261 17:53:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.261 17:53:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.261 ************************************ 00:19:04.261 END TEST raid_state_function_test_sb_md_interleaved 00:19:04.261 ************************************ 00:19:04.522 17:53:31 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:04.522 17:53:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:04.522 17:53:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.522 17:53:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.522 ************************************ 00:19:04.522 START TEST raid_superblock_test_md_interleaved 00:19:04.522 ************************************ 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89182 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89182 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89182 ']' 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.522 17:53:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.522 [2024-11-20 17:53:31.570392] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:19:04.522 [2024-11-20 17:53:31.570537] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89182 ] 00:19:04.783 [2024-11-20 17:53:31.748303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.783 [2024-11-20 17:53:31.874807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.043 [2024-11-20 17:53:32.100092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.043 [2024-11-20 17:53:32.100130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.303 malloc1 00:19:05.303 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.304 [2024-11-20 17:53:32.434346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:05.304 [2024-11-20 17:53:32.434423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.304 [2024-11-20 17:53:32.434447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:05.304 [2024-11-20 17:53:32.434457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.304 
[2024-11-20 17:53:32.436580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.304 [2024-11-20 17:53:32.436613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:05.304 pt1 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.304 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.564 malloc2 00:19:05.564 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.564 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.564 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.564 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.564 [2024-11-20 17:53:32.499450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.565 [2024-11-20 17:53:32.499521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.565 [2024-11-20 17:53:32.499545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:05.565 [2024-11-20 17:53:32.499554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.565 [2024-11-20 17:53:32.501660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.565 [2024-11-20 17:53:32.501703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.565 pt2 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.565 [2024-11-20 17:53:32.511454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.565 [2024-11-20 17:53:32.513537] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.565 [2024-11-20 17:53:32.513748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:05.565 [2024-11-20 17:53:32.513761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.565 [2024-11-20 17:53:32.513844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:05.565 [2024-11-20 17:53:32.513933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:05.565 [2024-11-20 17:53:32.513963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:05.565 [2024-11-20 17:53:32.514045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.565 
17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.565 "name": "raid_bdev1", 00:19:05.565 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:05.565 "strip_size_kb": 0, 00:19:05.565 "state": "online", 00:19:05.565 "raid_level": "raid1", 00:19:05.565 "superblock": true, 00:19:05.565 "num_base_bdevs": 2, 00:19:05.565 "num_base_bdevs_discovered": 2, 00:19:05.565 "num_base_bdevs_operational": 2, 00:19:05.565 "base_bdevs_list": [ 00:19:05.565 { 00:19:05.565 "name": "pt1", 00:19:05.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.565 "is_configured": true, 00:19:05.565 "data_offset": 256, 00:19:05.565 "data_size": 7936 00:19:05.565 }, 00:19:05.565 { 00:19:05.565 "name": "pt2", 00:19:05.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.565 "is_configured": true, 00:19:05.565 "data_offset": 256, 00:19:05.565 "data_size": 7936 00:19:05.565 } 00:19:05.565 ] 00:19:05.565 }' 00:19:05.565 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.565 17:53:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.824 [2024-11-20 17:53:32.934947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.824 "name": "raid_bdev1", 00:19:05.824 "aliases": [ 00:19:05.824 "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6" 00:19:05.824 ], 00:19:05.824 "product_name": "Raid Volume", 00:19:05.824 "block_size": 4128, 00:19:05.824 "num_blocks": 7936, 00:19:05.824 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:05.824 "md_size": 32, 
00:19:05.824 "md_interleave": true, 00:19:05.824 "dif_type": 0, 00:19:05.824 "assigned_rate_limits": { 00:19:05.824 "rw_ios_per_sec": 0, 00:19:05.824 "rw_mbytes_per_sec": 0, 00:19:05.824 "r_mbytes_per_sec": 0, 00:19:05.824 "w_mbytes_per_sec": 0 00:19:05.824 }, 00:19:05.824 "claimed": false, 00:19:05.824 "zoned": false, 00:19:05.824 "supported_io_types": { 00:19:05.824 "read": true, 00:19:05.824 "write": true, 00:19:05.824 "unmap": false, 00:19:05.824 "flush": false, 00:19:05.824 "reset": true, 00:19:05.824 "nvme_admin": false, 00:19:05.824 "nvme_io": false, 00:19:05.824 "nvme_io_md": false, 00:19:05.824 "write_zeroes": true, 00:19:05.824 "zcopy": false, 00:19:05.824 "get_zone_info": false, 00:19:05.824 "zone_management": false, 00:19:05.824 "zone_append": false, 00:19:05.824 "compare": false, 00:19:05.824 "compare_and_write": false, 00:19:05.824 "abort": false, 00:19:05.824 "seek_hole": false, 00:19:05.824 "seek_data": false, 00:19:05.824 "copy": false, 00:19:05.824 "nvme_iov_md": false 00:19:05.824 }, 00:19:05.824 "memory_domains": [ 00:19:05.824 { 00:19:05.824 "dma_device_id": "system", 00:19:05.824 "dma_device_type": 1 00:19:05.824 }, 00:19:05.824 { 00:19:05.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.824 "dma_device_type": 2 00:19:05.824 }, 00:19:05.824 { 00:19:05.824 "dma_device_id": "system", 00:19:05.824 "dma_device_type": 1 00:19:05.824 }, 00:19:05.824 { 00:19:05.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.824 "dma_device_type": 2 00:19:05.824 } 00:19:05.824 ], 00:19:05.824 "driver_specific": { 00:19:05.824 "raid": { 00:19:05.824 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:05.824 "strip_size_kb": 0, 00:19:05.824 "state": "online", 00:19:05.824 "raid_level": "raid1", 00:19:05.824 "superblock": true, 00:19:05.824 "num_base_bdevs": 2, 00:19:05.824 "num_base_bdevs_discovered": 2, 00:19:05.824 "num_base_bdevs_operational": 2, 00:19:05.824 "base_bdevs_list": [ 00:19:05.824 { 00:19:05.824 "name": "pt1", 00:19:05.824 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:05.824 "is_configured": true, 00:19:05.824 "data_offset": 256, 00:19:05.824 "data_size": 7936 00:19:05.824 }, 00:19:05.824 { 00:19:05.824 "name": "pt2", 00:19:05.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.824 "is_configured": true, 00:19:05.824 "data_offset": 256, 00:19:05.824 "data_size": 7936 00:19:05.824 } 00:19:05.824 ] 00:19:05.824 } 00:19:05.824 } 00:19:05.824 }' 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:05.824 pt2' 00:19:05.824 17:53:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:06.084 17:53:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.084 [2024-11-20 17:53:33.130572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=df2c553a-6bf6-4a71-bbb5-3424ae0dfae6 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z df2c553a-6bf6-4a71-bbb5-3424ae0dfae6 ']' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.084 [2024-11-20 17:53:33.166255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.084 [2024-11-20 17:53:33.166280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.084 [2024-11-20 17:53:33.166355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.084 [2024-11-20 17:53:33.166402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.084 [2024-11-20 17:53:33.166421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.084 17:53:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.084 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.084 17:53:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.345 [2024-11-20 17:53:33.282105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:06.345 [2024-11-20 17:53:33.284215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:06.345 [2024-11-20 17:53:33.284280] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:06.345 [2024-11-20 17:53:33.284323] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:06.345 [2024-11-20 17:53:33.284336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.345 [2024-11-20 17:53:33.284345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:06.345 request: 00:19:06.345 { 00:19:06.345 "name": "raid_bdev1", 00:19:06.345 "raid_level": "raid1", 00:19:06.345 "base_bdevs": [ 00:19:06.345 "malloc1", 00:19:06.345 "malloc2" 00:19:06.345 ], 00:19:06.345 "superblock": false, 00:19:06.345 "method": "bdev_raid_create", 00:19:06.345 "req_id": 1 00:19:06.345 } 00:19:06.345 Got JSON-RPC error response 00:19:06.345 response: 00:19:06.345 { 00:19:06.345 "code": -17, 00:19:06.345 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:06.345 } 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:06.345 17:53:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.345 [2024-11-20 17:53:33.345970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:06.345 [2024-11-20 17:53:33.346022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.345 [2024-11-20 17:53:33.346036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:06.345 [2024-11-20 17:53:33.346046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.345 [2024-11-20 17:53:33.348156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.345 [2024-11-20 17:53:33.348186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:06.345 [2024-11-20 17:53:33.348226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:06.345 [2024-11-20 17:53:33.348277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:06.345 pt1 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.345 17:53:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.345 
"name": "raid_bdev1", 00:19:06.345 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:06.345 "strip_size_kb": 0, 00:19:06.345 "state": "configuring", 00:19:06.345 "raid_level": "raid1", 00:19:06.345 "superblock": true, 00:19:06.345 "num_base_bdevs": 2, 00:19:06.345 "num_base_bdevs_discovered": 1, 00:19:06.345 "num_base_bdevs_operational": 2, 00:19:06.345 "base_bdevs_list": [ 00:19:06.345 { 00:19:06.345 "name": "pt1", 00:19:06.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.345 "is_configured": true, 00:19:06.345 "data_offset": 256, 00:19:06.345 "data_size": 7936 00:19:06.345 }, 00:19:06.345 { 00:19:06.345 "name": null, 00:19:06.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.345 "is_configured": false, 00:19:06.345 "data_offset": 256, 00:19:06.345 "data_size": 7936 00:19:06.345 } 00:19:06.345 ] 00:19:06.345 }' 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.345 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.606 [2024-11-20 17:53:33.749260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.606 [2024-11-20 17:53:33.749306] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.606 [2024-11-20 17:53:33.749321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:06.606 [2024-11-20 17:53:33.749330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.606 [2024-11-20 17:53:33.749424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.606 [2024-11-20 17:53:33.749437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.606 [2024-11-20 17:53:33.749470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:06.606 [2024-11-20 17:53:33.749487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.606 [2024-11-20 17:53:33.749554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:06.606 [2024-11-20 17:53:33.749565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:06.606 [2024-11-20 17:53:33.749629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:06.606 [2024-11-20 17:53:33.749689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:06.606 [2024-11-20 17:53:33.749696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:06.606 [2024-11-20 17:53:33.749748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.606 pt2 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:06.606 17:53:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.606 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.867 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.867 "name": 
"raid_bdev1", 00:19:06.867 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:06.867 "strip_size_kb": 0, 00:19:06.867 "state": "online", 00:19:06.867 "raid_level": "raid1", 00:19:06.867 "superblock": true, 00:19:06.867 "num_base_bdevs": 2, 00:19:06.867 "num_base_bdevs_discovered": 2, 00:19:06.867 "num_base_bdevs_operational": 2, 00:19:06.867 "base_bdevs_list": [ 00:19:06.867 { 00:19:06.867 "name": "pt1", 00:19:06.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.867 "is_configured": true, 00:19:06.867 "data_offset": 256, 00:19:06.867 "data_size": 7936 00:19:06.867 }, 00:19:06.867 { 00:19:06.867 "name": "pt2", 00:19:06.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.867 "is_configured": true, 00:19:06.867 "data_offset": 256, 00:19:06.867 "data_size": 7936 00:19:06.867 } 00:19:06.867 ] 00:19:06.867 }' 00:19:06.867 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.867 17:53:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.125 17:53:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.125 [2024-11-20 17:53:34.184935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.125 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.125 "name": "raid_bdev1", 00:19:07.125 "aliases": [ 00:19:07.125 "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6" 00:19:07.125 ], 00:19:07.125 "product_name": "Raid Volume", 00:19:07.125 "block_size": 4128, 00:19:07.125 "num_blocks": 7936, 00:19:07.125 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:07.125 "md_size": 32, 00:19:07.125 "md_interleave": true, 00:19:07.125 "dif_type": 0, 00:19:07.125 "assigned_rate_limits": { 00:19:07.125 "rw_ios_per_sec": 0, 00:19:07.125 "rw_mbytes_per_sec": 0, 00:19:07.125 "r_mbytes_per_sec": 0, 00:19:07.125 "w_mbytes_per_sec": 0 00:19:07.125 }, 00:19:07.125 "claimed": false, 00:19:07.125 "zoned": false, 00:19:07.125 "supported_io_types": { 00:19:07.125 "read": true, 00:19:07.125 "write": true, 00:19:07.125 "unmap": false, 00:19:07.125 "flush": false, 00:19:07.125 "reset": true, 00:19:07.125 "nvme_admin": false, 00:19:07.125 "nvme_io": false, 00:19:07.125 "nvme_io_md": false, 00:19:07.125 "write_zeroes": true, 00:19:07.125 "zcopy": false, 00:19:07.125 "get_zone_info": false, 00:19:07.125 "zone_management": false, 00:19:07.125 "zone_append": false, 00:19:07.125 "compare": false, 00:19:07.125 "compare_and_write": false, 00:19:07.125 "abort": false, 00:19:07.125 "seek_hole": false, 00:19:07.125 "seek_data": false, 00:19:07.125 "copy": false, 00:19:07.126 "nvme_iov_md": 
false 00:19:07.126 }, 00:19:07.126 "memory_domains": [ 00:19:07.126 { 00:19:07.126 "dma_device_id": "system", 00:19:07.126 "dma_device_type": 1 00:19:07.126 }, 00:19:07.126 { 00:19:07.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.126 "dma_device_type": 2 00:19:07.126 }, 00:19:07.126 { 00:19:07.126 "dma_device_id": "system", 00:19:07.126 "dma_device_type": 1 00:19:07.126 }, 00:19:07.126 { 00:19:07.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.126 "dma_device_type": 2 00:19:07.126 } 00:19:07.126 ], 00:19:07.126 "driver_specific": { 00:19:07.126 "raid": { 00:19:07.126 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:07.126 "strip_size_kb": 0, 00:19:07.126 "state": "online", 00:19:07.126 "raid_level": "raid1", 00:19:07.126 "superblock": true, 00:19:07.126 "num_base_bdevs": 2, 00:19:07.126 "num_base_bdevs_discovered": 2, 00:19:07.126 "num_base_bdevs_operational": 2, 00:19:07.126 "base_bdevs_list": [ 00:19:07.126 { 00:19:07.126 "name": "pt1", 00:19:07.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.126 "is_configured": true, 00:19:07.126 "data_offset": 256, 00:19:07.126 "data_size": 7936 00:19:07.126 }, 00:19:07.126 { 00:19:07.126 "name": "pt2", 00:19:07.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.126 "is_configured": true, 00:19:07.126 "data_offset": 256, 00:19:07.126 "data_size": 7936 00:19:07.126 } 00:19:07.126 ] 00:19:07.126 } 00:19:07.126 } 00:19:07.126 }' 00:19:07.126 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.126 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:07.126 pt2' 00:19:07.126 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:07.384 [2024-11-20 17:53:34.408532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.384 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' df2c553a-6bf6-4a71-bbb5-3424ae0dfae6 '!=' df2c553a-6bf6-4a71-bbb5-3424ae0dfae6 ']' 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.385 [2024-11-20 17:53:34.456248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:07.385 "name": "raid_bdev1", 00:19:07.385 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:07.385 "strip_size_kb": 0, 00:19:07.385 "state": "online", 00:19:07.385 "raid_level": "raid1", 00:19:07.385 "superblock": true, 00:19:07.385 "num_base_bdevs": 2, 00:19:07.385 "num_base_bdevs_discovered": 1, 00:19:07.385 "num_base_bdevs_operational": 1, 00:19:07.385 "base_bdevs_list": [ 00:19:07.385 { 00:19:07.385 "name": null, 00:19:07.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.385 "is_configured": false, 00:19:07.385 "data_offset": 0, 00:19:07.385 "data_size": 7936 00:19:07.385 }, 00:19:07.385 { 00:19:07.385 "name": "pt2", 00:19:07.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.385 "is_configured": true, 00:19:07.385 "data_offset": 256, 00:19:07.385 "data_size": 7936 00:19:07.385 } 00:19:07.385 ] 00:19:07.385 }' 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.385 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.952 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.952 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.952 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.952 [2024-11-20 17:53:34.939439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.952 [2024-11-20 17:53:34.939463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.952 [2024-11-20 17:53:34.939512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.952 [2024-11-20 17:53:34.939545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:07.952 [2024-11-20 17:53:34.939556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.953 17:53:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.953 [2024-11-20 17:53:35.011333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.953 [2024-11-20 17:53:35.011374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.953 [2024-11-20 17:53:35.011386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:07.953 [2024-11-20 17:53:35.011396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.953 [2024-11-20 17:53:35.013542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.953 [2024-11-20 17:53:35.013575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.953 [2024-11-20 17:53:35.013614] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:07.953 [2024-11-20 17:53:35.013660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.953 [2024-11-20 17:53:35.013709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:07.953 [2024-11-20 17:53:35.013720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:19:07.953 [2024-11-20 17:53:35.013796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:07.953 [2024-11-20 17:53:35.013860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:07.953 [2024-11-20 17:53:35.013867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:07.953 [2024-11-20 17:53:35.013917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.953 pt2 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.953 17:53:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.953 "name": "raid_bdev1", 00:19:07.953 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:07.953 "strip_size_kb": 0, 00:19:07.953 "state": "online", 00:19:07.953 "raid_level": "raid1", 00:19:07.953 "superblock": true, 00:19:07.953 "num_base_bdevs": 2, 00:19:07.953 "num_base_bdevs_discovered": 1, 00:19:07.953 "num_base_bdevs_operational": 1, 00:19:07.953 "base_bdevs_list": [ 00:19:07.953 { 00:19:07.953 "name": null, 00:19:07.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.953 "is_configured": false, 00:19:07.953 "data_offset": 256, 00:19:07.953 "data_size": 7936 00:19:07.953 }, 00:19:07.953 { 00:19:07.953 "name": "pt2", 00:19:07.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.953 "is_configured": true, 00:19:07.953 "data_offset": 256, 00:19:07.953 "data_size": 7936 00:19:07.953 } 00:19:07.953 ] 00:19:07.953 }' 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.953 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:08.522 17:53:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 [2024-11-20 17:53:35.418600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.522 [2024-11-20 17:53:35.418626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.522 [2024-11-20 17:53:35.418670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.522 [2024-11-20 17:53:35.418703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.522 [2024-11-20 17:53:35.418711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 [2024-11-20 17:53:35.478535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:08.522 [2024-11-20 17:53:35.478575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.522 [2024-11-20 17:53:35.478590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:08.522 [2024-11-20 17:53:35.478598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.522 [2024-11-20 17:53:35.480711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.522 [2024-11-20 17:53:35.480740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:08.522 [2024-11-20 17:53:35.480779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:08.522 [2024-11-20 17:53:35.480821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:08.522 [2024-11-20 17:53:35.480906] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:08.522 [2024-11-20 17:53:35.480920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.522 [2024-11-20 17:53:35.480934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:08.522 [2024-11-20 17:53:35.481007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:08.522 [2024-11-20 17:53:35.481079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:08.522 [2024-11-20 17:53:35.481087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:08.522 [2024-11-20 17:53:35.481147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:08.522 [2024-11-20 17:53:35.481197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:08.522 [2024-11-20 17:53:35.481206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:08.522 [2024-11-20 17:53:35.481266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.522 pt1 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.522 17:53:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.522 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.523 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.523 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.523 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.523 "name": "raid_bdev1", 00:19:08.523 "uuid": "df2c553a-6bf6-4a71-bbb5-3424ae0dfae6", 00:19:08.523 "strip_size_kb": 0, 00:19:08.523 "state": "online", 00:19:08.523 "raid_level": "raid1", 00:19:08.523 "superblock": true, 00:19:08.523 "num_base_bdevs": 2, 00:19:08.523 "num_base_bdevs_discovered": 1, 00:19:08.523 "num_base_bdevs_operational": 1, 00:19:08.523 "base_bdevs_list": [ 00:19:08.523 { 00:19:08.523 "name": null, 00:19:08.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.523 "is_configured": false, 00:19:08.523 "data_offset": 256, 00:19:08.523 "data_size": 7936 00:19:08.523 }, 00:19:08.523 { 00:19:08.523 "name": "pt2", 00:19:08.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.523 "is_configured": true, 00:19:08.523 "data_offset": 256, 00:19:08.523 "data_size": 7936 00:19:08.523 } 00:19:08.523 ] 00:19:08.523 }' 00:19:08.523 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.523 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:08.783 [2024-11-20 17:53:35.925925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.783 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.043 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' df2c553a-6bf6-4a71-bbb5-3424ae0dfae6 '!=' df2c553a-6bf6-4a71-bbb5-3424ae0dfae6 ']' 00:19:09.043 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89182 00:19:09.043 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89182 ']' 00:19:09.043 17:53:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89182 00:19:09.043 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:09.043 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.043 17:53:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89182 00:19:09.043 killing process with pid 89182 00:19:09.043 17:53:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.043 17:53:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.043 17:53:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89182' 00:19:09.043 17:53:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89182 00:19:09.043 [2024-11-20 17:53:36.003737] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.043 [2024-11-20 17:53:36.003795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.043 [2024-11-20 17:53:36.003826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.043 [2024-11-20 17:53:36.003838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:09.043 17:53:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89182 00:19:09.303 [2024-11-20 17:53:36.218306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.248 17:53:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:10.248 00:19:10.248 real 0m5.894s 00:19:10.248 user 0m8.765s 00:19:10.248 sys 0m1.148s 00:19:10.248 
17:53:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.248 17:53:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.248 ************************************ 00:19:10.248 END TEST raid_superblock_test_md_interleaved 00:19:10.248 ************************************ 00:19:10.523 17:53:37 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:10.523 17:53:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:10.523 17:53:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.523 17:53:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.523 ************************************ 00:19:10.523 START TEST raid_rebuild_test_sb_md_interleaved 00:19:10.523 ************************************ 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89510 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89510 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89510 ']' 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.523 17:53:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.523 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:10.523 Zero copy mechanism will not be used. 00:19:10.523 [2024-11-20 17:53:37.560585] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:19:10.523 [2024-11-20 17:53:37.560704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89510 ] 00:19:10.804 [2024-11-20 17:53:37.734639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.804 [2024-11-20 17:53:37.868583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.080 [2024-11-20 17:53:38.105467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.080 [2024-11-20 17:53:38.105531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.340 BaseBdev1_malloc 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.340 17:53:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.340 [2024-11-20 17:53:38.414929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:11.340 [2024-11-20 17:53:38.414992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.340 [2024-11-20 17:53:38.415047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:11.340 [2024-11-20 17:53:38.415062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.340 [2024-11-20 17:53:38.417194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.340 [2024-11-20 17:53:38.417239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.340 BaseBdev1 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.340 BaseBdev2_malloc 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.340 [2024-11-20 17:53:38.476176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:11.340 [2024-11-20 17:53:38.476250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.340 [2024-11-20 17:53:38.476273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:11.340 [2024-11-20 17:53:38.476287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.340 [2024-11-20 17:53:38.478411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.340 [2024-11-20 17:53:38.478448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.340 BaseBdev2 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.340 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.601 spare_malloc 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.601 spare_delay 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.601 [2024-11-20 17:53:38.558362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.601 [2024-11-20 17:53:38.558436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.601 [2024-11-20 17:53:38.558458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:11.601 [2024-11-20 17:53:38.558470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.601 [2024-11-20 17:53:38.560600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.601 [2024-11-20 17:53:38.560639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.601 spare 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.601 [2024-11-20 17:53:38.570391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.601 [2024-11-20 17:53:38.572490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.601 [2024-11-20 
17:53:38.572703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:11.601 [2024-11-20 17:53:38.572721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:11.601 [2024-11-20 17:53:38.572797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:11.601 [2024-11-20 17:53:38.572891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:11.601 [2024-11-20 17:53:38.572902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:11.601 [2024-11-20 17:53:38.572971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.601 "name": "raid_bdev1", 00:19:11.601 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:11.601 "strip_size_kb": 0, 00:19:11.601 "state": "online", 00:19:11.601 "raid_level": "raid1", 00:19:11.601 "superblock": true, 00:19:11.601 "num_base_bdevs": 2, 00:19:11.601 "num_base_bdevs_discovered": 2, 00:19:11.601 "num_base_bdevs_operational": 2, 00:19:11.601 "base_bdevs_list": [ 00:19:11.601 { 00:19:11.601 "name": "BaseBdev1", 00:19:11.601 "uuid": "99f41007-94c1-5ad5-8ba9-da79232028d0", 00:19:11.601 "is_configured": true, 00:19:11.601 "data_offset": 256, 00:19:11.601 "data_size": 7936 00:19:11.601 }, 00:19:11.601 { 00:19:11.601 "name": "BaseBdev2", 00:19:11.601 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:11.601 "is_configured": true, 00:19:11.601 "data_offset": 256, 00:19:11.601 "data_size": 7936 00:19:11.601 } 00:19:11.601 ] 00:19:11.601 }' 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.601 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.860 17:53:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.860 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.860 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.860 17:53:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:11.860 [2024-11-20 17:53:38.997863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.860 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.119 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:12.119 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:12.119 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.119 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.119 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:12.120 17:53:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.120 [2024-11-20 17:53:39.097419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.120 17:53:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.120 "name": "raid_bdev1", 00:19:12.120 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:12.120 "strip_size_kb": 0, 00:19:12.120 "state": "online", 00:19:12.120 "raid_level": "raid1", 00:19:12.120 "superblock": true, 00:19:12.120 "num_base_bdevs": 2, 00:19:12.120 "num_base_bdevs_discovered": 1, 00:19:12.120 "num_base_bdevs_operational": 1, 00:19:12.120 "base_bdevs_list": [ 00:19:12.120 { 00:19:12.120 "name": null, 00:19:12.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.120 "is_configured": false, 00:19:12.120 "data_offset": 0, 00:19:12.120 "data_size": 7936 00:19:12.120 }, 00:19:12.120 { 00:19:12.120 "name": "BaseBdev2", 00:19:12.120 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:12.120 "is_configured": true, 00:19:12.120 "data_offset": 256, 00:19:12.120 "data_size": 7936 00:19:12.120 } 00:19:12.120 ] 00:19:12.120 }' 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.120 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.689 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.689 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.689 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.689 [2024-11-20 17:53:39.580693] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.689 [2024-11-20 17:53:39.598790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:12.689 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.689 17:53:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:12.689 [2024-11-20 17:53:39.600878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.628 "name": "raid_bdev1", 00:19:13.628 
"uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:13.628 "strip_size_kb": 0, 00:19:13.628 "state": "online", 00:19:13.628 "raid_level": "raid1", 00:19:13.628 "superblock": true, 00:19:13.628 "num_base_bdevs": 2, 00:19:13.628 "num_base_bdevs_discovered": 2, 00:19:13.628 "num_base_bdevs_operational": 2, 00:19:13.628 "process": { 00:19:13.628 "type": "rebuild", 00:19:13.628 "target": "spare", 00:19:13.628 "progress": { 00:19:13.628 "blocks": 2560, 00:19:13.628 "percent": 32 00:19:13.628 } 00:19:13.628 }, 00:19:13.628 "base_bdevs_list": [ 00:19:13.628 { 00:19:13.628 "name": "spare", 00:19:13.628 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:13.628 "is_configured": true, 00:19:13.628 "data_offset": 256, 00:19:13.628 "data_size": 7936 00:19:13.628 }, 00:19:13.628 { 00:19:13.628 "name": "BaseBdev2", 00:19:13.628 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:13.628 "is_configured": true, 00:19:13.628 "data_offset": 256, 00:19:13.628 "data_size": 7936 00:19:13.628 } 00:19:13.628 ] 00:19:13.628 }' 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.628 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.628 [2024-11-20 17:53:40.761682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:13.888 [2024-11-20 17:53:40.809564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.888 [2024-11-20 17:53:40.809626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.888 [2024-11-20 17:53:40.809657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.888 [2024-11-20 17:53:40.809667] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.888 "name": "raid_bdev1", 00:19:13.888 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:13.888 "strip_size_kb": 0, 00:19:13.888 "state": "online", 00:19:13.888 "raid_level": "raid1", 00:19:13.888 "superblock": true, 00:19:13.888 "num_base_bdevs": 2, 00:19:13.888 "num_base_bdevs_discovered": 1, 00:19:13.888 "num_base_bdevs_operational": 1, 00:19:13.888 "base_bdevs_list": [ 00:19:13.888 { 00:19:13.888 "name": null, 00:19:13.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.888 "is_configured": false, 00:19:13.888 "data_offset": 0, 00:19:13.888 "data_size": 7936 00:19:13.888 }, 00:19:13.888 { 00:19:13.888 "name": "BaseBdev2", 00:19:13.888 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:13.888 "is_configured": true, 00:19:13.888 "data_offset": 256, 00:19:13.888 "data_size": 7936 00:19:13.888 } 00:19:13.888 ] 00:19:13.888 }' 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.888 17:53:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.148 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.408 "name": "raid_bdev1", 00:19:14.408 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:14.408 "strip_size_kb": 0, 00:19:14.408 "state": "online", 00:19:14.408 "raid_level": "raid1", 00:19:14.408 "superblock": true, 00:19:14.408 "num_base_bdevs": 2, 00:19:14.408 "num_base_bdevs_discovered": 1, 00:19:14.408 "num_base_bdevs_operational": 1, 00:19:14.408 "base_bdevs_list": [ 00:19:14.408 { 00:19:14.408 "name": null, 00:19:14.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.408 "is_configured": false, 00:19:14.408 "data_offset": 0, 00:19:14.408 "data_size": 7936 00:19:14.408 }, 00:19:14.408 { 00:19:14.408 "name": "BaseBdev2", 00:19:14.408 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:14.408 "is_configured": true, 00:19:14.408 "data_offset": 256, 00:19:14.408 "data_size": 7936 00:19:14.408 } 00:19:14.408 ] 00:19:14.408 }' 
00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.408 [2024-11-20 17:53:41.416677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.408 [2024-11-20 17:53:41.432912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.408 17:53:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:14.408 [2024-11-20 17:53:41.435005] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.347 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.347 "name": "raid_bdev1", 00:19:15.347 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:15.347 "strip_size_kb": 0, 00:19:15.347 "state": "online", 00:19:15.347 "raid_level": "raid1", 00:19:15.347 "superblock": true, 00:19:15.347 "num_base_bdevs": 2, 00:19:15.347 "num_base_bdevs_discovered": 2, 00:19:15.347 "num_base_bdevs_operational": 2, 00:19:15.347 "process": { 00:19:15.347 "type": "rebuild", 00:19:15.347 "target": "spare", 00:19:15.347 "progress": { 00:19:15.347 "blocks": 2560, 00:19:15.347 "percent": 32 00:19:15.347 } 00:19:15.347 }, 00:19:15.347 "base_bdevs_list": [ 00:19:15.347 { 00:19:15.347 "name": "spare", 00:19:15.347 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:15.347 "is_configured": true, 00:19:15.347 "data_offset": 256, 00:19:15.347 "data_size": 7936 00:19:15.347 }, 00:19:15.347 { 00:19:15.347 "name": "BaseBdev2", 00:19:15.347 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:15.347 "is_configured": true, 00:19:15.347 "data_offset": 256, 00:19:15.347 "data_size": 7936 00:19:15.347 } 00:19:15.347 ] 00:19:15.347 }' 00:19:15.347 17:53:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:15.608 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=751 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.608 17:53:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.608 "name": "raid_bdev1", 00:19:15.608 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:15.608 "strip_size_kb": 0, 00:19:15.608 "state": "online", 00:19:15.608 "raid_level": "raid1", 00:19:15.608 "superblock": true, 00:19:15.608 "num_base_bdevs": 2, 00:19:15.608 "num_base_bdevs_discovered": 2, 00:19:15.608 "num_base_bdevs_operational": 2, 00:19:15.608 "process": { 00:19:15.608 "type": "rebuild", 00:19:15.608 "target": "spare", 00:19:15.608 "progress": { 00:19:15.608 "blocks": 2816, 00:19:15.608 "percent": 35 00:19:15.608 } 00:19:15.608 }, 00:19:15.608 "base_bdevs_list": [ 00:19:15.608 { 00:19:15.608 "name": "spare", 00:19:15.608 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:15.608 "is_configured": true, 00:19:15.608 "data_offset": 256, 00:19:15.608 "data_size": 7936 00:19:15.608 }, 00:19:15.608 { 00:19:15.608 "name": "BaseBdev2", 00:19:15.608 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:15.608 "is_configured": true, 00:19:15.608 "data_offset": 256, 00:19:15.608 "data_size": 7936 00:19:15.608 } 00:19:15.608 ] 00:19:15.608 }' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.608 17:53:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.548 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.809 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.809 17:53:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.809 "name": "raid_bdev1", 00:19:16.809 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:16.809 "strip_size_kb": 0, 00:19:16.809 "state": "online", 00:19:16.809 "raid_level": "raid1", 00:19:16.809 "superblock": true, 00:19:16.809 "num_base_bdevs": 2, 00:19:16.809 "num_base_bdevs_discovered": 2, 00:19:16.809 "num_base_bdevs_operational": 2, 00:19:16.809 "process": { 00:19:16.809 "type": "rebuild", 00:19:16.809 "target": "spare", 00:19:16.809 "progress": { 00:19:16.809 "blocks": 5632, 00:19:16.809 "percent": 70 00:19:16.809 } 00:19:16.809 }, 00:19:16.809 "base_bdevs_list": [ 00:19:16.809 { 00:19:16.809 "name": "spare", 00:19:16.809 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:16.809 "is_configured": true, 00:19:16.809 "data_offset": 256, 00:19:16.809 "data_size": 7936 00:19:16.809 }, 00:19:16.809 { 00:19:16.809 "name": "BaseBdev2", 00:19:16.809 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:16.809 "is_configured": true, 00:19:16.809 "data_offset": 256, 00:19:16.809 "data_size": 7936 00:19:16.809 } 00:19:16.809 ] 00:19:16.809 }' 00:19:16.809 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.809 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.809 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.809 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.809 17:53:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.749 [2024-11-20 17:53:44.556469] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:17.749 [2024-11-20 17:53:44.556542] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:17.749 [2024-11-20 17:53:44.556661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.749 "name": "raid_bdev1", 00:19:17.749 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:17.749 "strip_size_kb": 0, 00:19:17.749 "state": "online", 00:19:17.749 "raid_level": "raid1", 00:19:17.749 "superblock": true, 00:19:17.749 "num_base_bdevs": 2, 00:19:17.749 
"num_base_bdevs_discovered": 2, 00:19:17.749 "num_base_bdevs_operational": 2, 00:19:17.749 "base_bdevs_list": [ 00:19:17.749 { 00:19:17.749 "name": "spare", 00:19:17.749 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:17.749 "is_configured": true, 00:19:17.749 "data_offset": 256, 00:19:17.749 "data_size": 7936 00:19:17.749 }, 00:19:17.749 { 00:19:17.749 "name": "BaseBdev2", 00:19:17.749 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:17.749 "is_configured": true, 00:19:17.749 "data_offset": 256, 00:19:17.749 "data_size": 7936 00:19:17.749 } 00:19:17.749 ] 00:19:17.749 }' 00:19:17.749 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.009 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:18.009 17:53:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.009 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:18.009 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:18.009 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.009 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.010 17:53:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.010 "name": "raid_bdev1", 00:19:18.010 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:18.010 "strip_size_kb": 0, 00:19:18.010 "state": "online", 00:19:18.010 "raid_level": "raid1", 00:19:18.010 "superblock": true, 00:19:18.010 "num_base_bdevs": 2, 00:19:18.010 "num_base_bdevs_discovered": 2, 00:19:18.010 "num_base_bdevs_operational": 2, 00:19:18.010 "base_bdevs_list": [ 00:19:18.010 { 00:19:18.010 "name": "spare", 00:19:18.010 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:18.010 "is_configured": true, 00:19:18.010 "data_offset": 256, 00:19:18.010 "data_size": 7936 00:19:18.010 }, 00:19:18.010 { 00:19:18.010 "name": "BaseBdev2", 00:19:18.010 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:18.010 "is_configured": true, 00:19:18.010 "data_offset": 256, 00:19:18.010 "data_size": 7936 00:19:18.010 } 00:19:18.010 ] 00:19:18.010 }' 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.010 17:53:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.010 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.270 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.270 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.270 "name": 
"raid_bdev1", 00:19:18.270 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:18.270 "strip_size_kb": 0, 00:19:18.270 "state": "online", 00:19:18.270 "raid_level": "raid1", 00:19:18.270 "superblock": true, 00:19:18.270 "num_base_bdevs": 2, 00:19:18.270 "num_base_bdevs_discovered": 2, 00:19:18.270 "num_base_bdevs_operational": 2, 00:19:18.270 "base_bdevs_list": [ 00:19:18.270 { 00:19:18.270 "name": "spare", 00:19:18.270 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:18.270 "is_configured": true, 00:19:18.270 "data_offset": 256, 00:19:18.270 "data_size": 7936 00:19:18.270 }, 00:19:18.270 { 00:19:18.270 "name": "BaseBdev2", 00:19:18.270 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:18.270 "is_configured": true, 00:19:18.270 "data_offset": 256, 00:19:18.270 "data_size": 7936 00:19:18.270 } 00:19:18.270 ] 00:19:18.270 }' 00:19:18.270 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.270 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.530 [2024-11-20 17:53:45.601502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.530 [2024-11-20 17:53:45.601536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.530 [2024-11-20 17:53:45.601629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.530 [2024-11-20 17:53:45.601693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.530 [2024-11-20 
17:53:45.601705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.530 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.530 17:53:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.530 [2024-11-20 17:53:45.669378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.530 [2024-11-20 17:53:45.669431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.530 [2024-11-20 17:53:45.669474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:18.530 [2024-11-20 17:53:45.669483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.530 [2024-11-20 17:53:45.671718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.530 [2024-11-20 17:53:45.671754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.530 [2024-11-20 17:53:45.671804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:18.531 [2024-11-20 17:53:45.671854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.531 [2024-11-20 17:53:45.671978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.531 spare 00:19:18.531 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.531 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:18.531 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.531 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.790 [2024-11-20 17:53:45.771881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:18.790 [2024-11-20 17:53:45.771912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:18.790 [2024-11-20 17:53:45.772003] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:18.790 [2024-11-20 17:53:45.772085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:18.790 [2024-11-20 17:53:45.772094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:18.790 [2024-11-20 17:53:45.772165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.790 17:53:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.790 "name": "raid_bdev1", 00:19:18.790 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:18.790 "strip_size_kb": 0, 00:19:18.790 "state": "online", 00:19:18.790 "raid_level": "raid1", 00:19:18.790 "superblock": true, 00:19:18.790 "num_base_bdevs": 2, 00:19:18.790 "num_base_bdevs_discovered": 2, 00:19:18.790 "num_base_bdevs_operational": 2, 00:19:18.790 "base_bdevs_list": [ 00:19:18.790 { 00:19:18.790 "name": "spare", 00:19:18.790 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:18.790 "is_configured": true, 00:19:18.790 "data_offset": 256, 00:19:18.790 "data_size": 7936 00:19:18.790 }, 00:19:18.790 { 00:19:18.790 "name": "BaseBdev2", 00:19:18.790 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:18.790 "is_configured": true, 00:19:18.790 "data_offset": 256, 00:19:18.790 "data_size": 7936 00:19:18.790 } 00:19:18.790 ] 00:19:18.790 }' 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.790 17:53:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.050 17:53:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.050 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.311 "name": "raid_bdev1", 00:19:19.311 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:19.311 "strip_size_kb": 0, 00:19:19.311 "state": "online", 00:19:19.311 "raid_level": "raid1", 00:19:19.311 "superblock": true, 00:19:19.311 "num_base_bdevs": 2, 00:19:19.311 "num_base_bdevs_discovered": 2, 00:19:19.311 "num_base_bdevs_operational": 2, 00:19:19.311 "base_bdevs_list": [ 00:19:19.311 { 00:19:19.311 "name": "spare", 00:19:19.311 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:19.311 "is_configured": true, 00:19:19.311 "data_offset": 256, 00:19:19.311 "data_size": 7936 00:19:19.311 }, 00:19:19.311 { 00:19:19.311 "name": "BaseBdev2", 00:19:19.311 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:19.311 "is_configured": true, 00:19:19.311 "data_offset": 256, 00:19:19.311 "data_size": 7936 00:19:19.311 } 00:19:19.311 ] 00:19:19.311 }' 00:19:19.311 17:53:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.311 [2024-11-20 17:53:46.376386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.311 17:53:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.311 "name": "raid_bdev1", 00:19:19.311 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:19.311 "strip_size_kb": 0, 00:19:19.311 "state": "online", 00:19:19.311 
"raid_level": "raid1", 00:19:19.311 "superblock": true, 00:19:19.311 "num_base_bdevs": 2, 00:19:19.311 "num_base_bdevs_discovered": 1, 00:19:19.311 "num_base_bdevs_operational": 1, 00:19:19.311 "base_bdevs_list": [ 00:19:19.311 { 00:19:19.311 "name": null, 00:19:19.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.311 "is_configured": false, 00:19:19.311 "data_offset": 0, 00:19:19.311 "data_size": 7936 00:19:19.311 }, 00:19:19.311 { 00:19:19.311 "name": "BaseBdev2", 00:19:19.311 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:19.311 "is_configured": true, 00:19:19.311 "data_offset": 256, 00:19:19.311 "data_size": 7936 00:19:19.311 } 00:19:19.311 ] 00:19:19.311 }' 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.311 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.882 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.882 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.882 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.882 [2024-11-20 17:53:46.827581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.882 [2024-11-20 17:53:46.827711] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.882 [2024-11-20 17:53:46.827735] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:19.882 [2024-11-20 17:53:46.827766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.882 [2024-11-20 17:53:46.843704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:19.882 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.882 17:53:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:19.882 [2024-11-20 17:53:46.845816] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:20.823 "name": "raid_bdev1", 00:19:20.823 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:20.823 "strip_size_kb": 0, 00:19:20.823 "state": "online", 00:19:20.823 "raid_level": "raid1", 00:19:20.823 "superblock": true, 00:19:20.823 "num_base_bdevs": 2, 00:19:20.823 "num_base_bdevs_discovered": 2, 00:19:20.823 "num_base_bdevs_operational": 2, 00:19:20.823 "process": { 00:19:20.823 "type": "rebuild", 00:19:20.823 "target": "spare", 00:19:20.823 "progress": { 00:19:20.823 "blocks": 2560, 00:19:20.823 "percent": 32 00:19:20.823 } 00:19:20.823 }, 00:19:20.823 "base_bdevs_list": [ 00:19:20.823 { 00:19:20.823 "name": "spare", 00:19:20.823 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:20.823 "is_configured": true, 00:19:20.823 "data_offset": 256, 00:19:20.823 "data_size": 7936 00:19:20.823 }, 00:19:20.823 { 00:19:20.823 "name": "BaseBdev2", 00:19:20.823 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:20.823 "is_configured": true, 00:19:20.823 "data_offset": 256, 00:19:20.823 "data_size": 7936 00:19:20.823 } 00:19:20.823 ] 00:19:20.823 }' 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.823 17:53:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 [2024-11-20 17:53:47.965763] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.084 [2024-11-20 17:53:48.054402] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:21.084 [2024-11-20 17:53:48.054485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.084 [2024-11-20 17:53:48.054499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.084 [2024-11-20 17:53:48.054508] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.084 17:53:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.084 "name": "raid_bdev1", 00:19:21.084 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:21.084 "strip_size_kb": 0, 00:19:21.084 "state": "online", 00:19:21.084 "raid_level": "raid1", 00:19:21.084 "superblock": true, 00:19:21.084 "num_base_bdevs": 2, 00:19:21.084 "num_base_bdevs_discovered": 1, 00:19:21.084 "num_base_bdevs_operational": 1, 00:19:21.084 "base_bdevs_list": [ 00:19:21.084 { 00:19:21.084 "name": null, 00:19:21.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.084 "is_configured": false, 00:19:21.084 "data_offset": 0, 00:19:21.084 "data_size": 7936 00:19:21.084 }, 00:19:21.084 { 00:19:21.084 "name": "BaseBdev2", 00:19:21.084 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:21.084 "is_configured": true, 00:19:21.084 "data_offset": 256, 00:19:21.084 "data_size": 7936 00:19:21.084 } 00:19:21.084 ] 00:19:21.084 }' 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.084 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.655 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:21.655 17:53:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.655 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.655 [2024-11-20 17:53:48.537034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.655 [2024-11-20 17:53:48.537100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.655 [2024-11-20 17:53:48.537129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:21.655 [2024-11-20 17:53:48.537141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.655 [2024-11-20 17:53:48.537356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.655 [2024-11-20 17:53:48.537375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.655 [2024-11-20 17:53:48.537424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:21.655 [2024-11-20 17:53:48.537439] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.655 [2024-11-20 17:53:48.537450] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:21.655 [2024-11-20 17:53:48.537472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.655 [2024-11-20 17:53:48.552945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:21.655 spare 00:19:21.655 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.655 17:53:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:21.655 [2024-11-20 17:53:48.555057] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.593 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.593 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.593 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:22.594 "name": "raid_bdev1", 00:19:22.594 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:22.594 "strip_size_kb": 0, 00:19:22.594 "state": "online", 00:19:22.594 "raid_level": "raid1", 00:19:22.594 "superblock": true, 00:19:22.594 "num_base_bdevs": 2, 00:19:22.594 "num_base_bdevs_discovered": 2, 00:19:22.594 "num_base_bdevs_operational": 2, 00:19:22.594 "process": { 00:19:22.594 "type": "rebuild", 00:19:22.594 "target": "spare", 00:19:22.594 "progress": { 00:19:22.594 "blocks": 2560, 00:19:22.594 "percent": 32 00:19:22.594 } 00:19:22.594 }, 00:19:22.594 "base_bdevs_list": [ 00:19:22.594 { 00:19:22.594 "name": "spare", 00:19:22.594 "uuid": "55530151-bd94-5bc7-a1c6-6775934bad42", 00:19:22.594 "is_configured": true, 00:19:22.594 "data_offset": 256, 00:19:22.594 "data_size": 7936 00:19:22.594 }, 00:19:22.594 { 00:19:22.594 "name": "BaseBdev2", 00:19:22.594 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:22.594 "is_configured": true, 00:19:22.594 "data_offset": 256, 00:19:22.594 "data_size": 7936 00:19:22.594 } 00:19:22.594 ] 00:19:22.594 }' 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.594 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.594 [2024-11-20 
17:53:49.706702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.594 [2024-11-20 17:53:49.763455] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.594 [2024-11-20 17:53:49.763523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.594 [2024-11-20 17:53:49.763541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.594 [2024-11-20 17:53:49.763547] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.854 17:53:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.854 "name": "raid_bdev1", 00:19:22.854 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:22.854 "strip_size_kb": 0, 00:19:22.854 "state": "online", 00:19:22.854 "raid_level": "raid1", 00:19:22.854 "superblock": true, 00:19:22.854 "num_base_bdevs": 2, 00:19:22.854 "num_base_bdevs_discovered": 1, 00:19:22.854 "num_base_bdevs_operational": 1, 00:19:22.854 "base_bdevs_list": [ 00:19:22.854 { 00:19:22.854 "name": null, 00:19:22.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.854 "is_configured": false, 00:19:22.854 "data_offset": 0, 00:19:22.854 "data_size": 7936 00:19:22.854 }, 00:19:22.854 { 00:19:22.854 "name": "BaseBdev2", 00:19:22.854 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:22.854 "is_configured": true, 00:19:22.854 "data_offset": 256, 00:19:22.854 "data_size": 7936 00:19:22.854 } 00:19:22.854 ] 00:19:22.854 }' 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.854 17:53:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.115 17:53:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.115 "name": "raid_bdev1", 00:19:23.115 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:23.115 "strip_size_kb": 0, 00:19:23.115 "state": "online", 00:19:23.115 "raid_level": "raid1", 00:19:23.115 "superblock": true, 00:19:23.115 "num_base_bdevs": 2, 00:19:23.115 "num_base_bdevs_discovered": 1, 00:19:23.115 "num_base_bdevs_operational": 1, 00:19:23.115 "base_bdevs_list": [ 00:19:23.115 { 00:19:23.115 "name": null, 00:19:23.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.115 "is_configured": false, 00:19:23.115 "data_offset": 0, 00:19:23.115 "data_size": 7936 00:19:23.115 }, 00:19:23.115 { 00:19:23.115 "name": "BaseBdev2", 00:19:23.115 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:23.115 "is_configured": true, 00:19:23.115 "data_offset": 256, 
00:19:23.115 "data_size": 7936 00:19:23.115 } 00:19:23.115 ] 00:19:23.115 }' 00:19:23.115 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.376 [2024-11-20 17:53:50.357575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:23.376 [2024-11-20 17:53:50.357629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.376 [2024-11-20 17:53:50.357669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:23.376 [2024-11-20 17:53:50.357679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.376 [2024-11-20 17:53:50.357868] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.376 [2024-11-20 17:53:50.357902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:23.376 [2024-11-20 17:53:50.357951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:23.376 [2024-11-20 17:53:50.357966] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.376 [2024-11-20 17:53:50.357977] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.376 [2024-11-20 17:53:50.357989] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:23.376 BaseBdev1 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.376 17:53:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.316 17:53:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.316 "name": "raid_bdev1", 00:19:24.316 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:24.316 "strip_size_kb": 0, 00:19:24.316 "state": "online", 00:19:24.316 "raid_level": "raid1", 00:19:24.316 "superblock": true, 00:19:24.316 "num_base_bdevs": 2, 00:19:24.316 "num_base_bdevs_discovered": 1, 00:19:24.316 "num_base_bdevs_operational": 1, 00:19:24.316 "base_bdevs_list": [ 00:19:24.316 { 00:19:24.316 "name": null, 00:19:24.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.316 "is_configured": false, 00:19:24.316 "data_offset": 0, 00:19:24.316 "data_size": 7936 00:19:24.316 }, 00:19:24.316 { 00:19:24.316 "name": "BaseBdev2", 00:19:24.316 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:24.316 "is_configured": true, 00:19:24.316 "data_offset": 256, 00:19:24.316 "data_size": 7936 00:19:24.316 } 00:19:24.316 ] 00:19:24.316 }' 00:19:24.316 17:53:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.316 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.886 "name": "raid_bdev1", 00:19:24.886 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:24.886 "strip_size_kb": 0, 00:19:24.886 "state": "online", 00:19:24.886 "raid_level": "raid1", 00:19:24.886 "superblock": true, 00:19:24.886 "num_base_bdevs": 2, 00:19:24.886 "num_base_bdevs_discovered": 1, 00:19:24.886 "num_base_bdevs_operational": 1, 00:19:24.886 "base_bdevs_list": [ 00:19:24.886 { 00:19:24.886 "name": 
null, 00:19:24.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.886 "is_configured": false, 00:19:24.886 "data_offset": 0, 00:19:24.886 "data_size": 7936 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "name": "BaseBdev2", 00:19:24.886 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:24.886 "is_configured": true, 00:19:24.886 "data_offset": 256, 00:19:24.886 "data_size": 7936 00:19:24.886 } 00:19:24.886 ] 00:19:24.886 }' 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.886 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.886 [2024-11-20 17:53:51.946938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.886 [2024-11-20 17:53:51.947067] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.886 [2024-11-20 17:53:51.947086] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:24.886 request: 00:19:24.886 { 00:19:24.887 "base_bdev": "BaseBdev1", 00:19:24.887 "raid_bdev": "raid_bdev1", 00:19:24.887 "method": "bdev_raid_add_base_bdev", 00:19:24.887 "req_id": 1 00:19:24.887 } 00:19:24.887 Got JSON-RPC error response 00:19:24.887 response: 00:19:24.887 { 00:19:24.887 "code": -22, 00:19:24.887 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:24.887 } 00:19:24.887 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:24.887 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:24.887 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.887 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.887 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.887 17:53:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.827 17:53:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.087 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.087 "name": "raid_bdev1", 00:19:26.087 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:26.087 "strip_size_kb": 0, 
00:19:26.087 "state": "online", 00:19:26.087 "raid_level": "raid1", 00:19:26.087 "superblock": true, 00:19:26.087 "num_base_bdevs": 2, 00:19:26.087 "num_base_bdevs_discovered": 1, 00:19:26.087 "num_base_bdevs_operational": 1, 00:19:26.087 "base_bdevs_list": [ 00:19:26.087 { 00:19:26.087 "name": null, 00:19:26.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.087 "is_configured": false, 00:19:26.087 "data_offset": 0, 00:19:26.087 "data_size": 7936 00:19:26.087 }, 00:19:26.087 { 00:19:26.087 "name": "BaseBdev2", 00:19:26.087 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:26.087 "is_configured": true, 00:19:26.087 "data_offset": 256, 00:19:26.087 "data_size": 7936 00:19:26.087 } 00:19:26.087 ] 00:19:26.087 }' 00:19:26.087 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.087 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.348 
17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.348 "name": "raid_bdev1", 00:19:26.348 "uuid": "5c8d5727-1b5a-48f0-9502-affcf3d4cc58", 00:19:26.348 "strip_size_kb": 0, 00:19:26.348 "state": "online", 00:19:26.348 "raid_level": "raid1", 00:19:26.348 "superblock": true, 00:19:26.348 "num_base_bdevs": 2, 00:19:26.348 "num_base_bdevs_discovered": 1, 00:19:26.348 "num_base_bdevs_operational": 1, 00:19:26.348 "base_bdevs_list": [ 00:19:26.348 { 00:19:26.348 "name": null, 00:19:26.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.348 "is_configured": false, 00:19:26.348 "data_offset": 0, 00:19:26.348 "data_size": 7936 00:19:26.348 }, 00:19:26.348 { 00:19:26.348 "name": "BaseBdev2", 00:19:26.348 "uuid": "e2673967-6991-5d9e-8a35-9290714931e2", 00:19:26.348 "is_configured": true, 00:19:26.348 "data_offset": 256, 00:19:26.348 "data_size": 7936 00:19:26.348 } 00:19:26.348 ] 00:19:26.348 }' 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89510 00:19:26.348 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89510 ']' 00:19:26.348 17:53:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89510 00:19:26.608 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:26.608 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.608 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89510 00:19:26.608 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.608 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.608 killing process with pid 89510 00:19:26.608 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89510' 00:19:26.608 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89510 00:19:26.608 Received shutdown signal, test time was about 60.000000 seconds 00:19:26.608 00:19:26.608 Latency(us) 00:19:26.608 [2024-11-20T17:53:53.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.608 [2024-11-20T17:53:53.784Z] =================================================================================================================== 00:19:26.608 [2024-11-20T17:53:53.784Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.608 [2024-11-20 17:53:53.555508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.608 [2024-11-20 17:53:53.555605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.608 [2024-11-20 17:53:53.555646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.608 [2024-11-20 17:53:53.555659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:26.608 17:53:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89510 00:19:26.868 [2024-11-20 17:53:53.862260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:28.251 17:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:28.251 00:19:28.251 real 0m17.554s 00:19:28.251 user 0m22.866s 00:19:28.251 sys 0m1.726s 00:19:28.251 17:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.251 17:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.251 ************************************ 00:19:28.251 END TEST raid_rebuild_test_sb_md_interleaved 00:19:28.251 ************************************ 00:19:28.251 17:53:55 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:28.251 17:53:55 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:28.251 17:53:55 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89510 ']' 00:19:28.251 17:53:55 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89510 00:19:28.251 17:53:55 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:28.251 00:19:28.251 real 12m13.272s 00:19:28.251 user 16m19.360s 00:19:28.251 sys 2m0.531s 00:19:28.251 17:53:55 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.251 17:53:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.251 ************************************ 00:19:28.251 END TEST bdev_raid 00:19:28.251 ************************************ 00:19:28.251 17:53:55 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:28.251 17:53:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:28.251 17:53:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.251 17:53:55 -- common/autotest_common.sh@10 -- # set +x 00:19:28.251 
************************************ 00:19:28.251 START TEST spdkcli_raid 00:19:28.251 ************************************ 00:19:28.251 17:53:55 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:28.251 * Looking for test storage... 00:19:28.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.251 17:53:55 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:28.251 17:53:55 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:28.251 17:53:55 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:28.251 17:53:55 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:28.251 17:53:55 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.251 17:53:55 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.251 17:53:55 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.251 17:53:55 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.251 17:53:55 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.251 17:53:55 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.251 17:53:55 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.252 17:53:55 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:28.252 17:53:55 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.252 17:53:55 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:28.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.252 --rc genhtml_branch_coverage=1 00:19:28.252 --rc genhtml_function_coverage=1 00:19:28.252 --rc genhtml_legend=1 00:19:28.252 --rc geninfo_all_blocks=1 00:19:28.252 --rc geninfo_unexecuted_blocks=1 00:19:28.252 00:19:28.252 ' 00:19:28.252 17:53:55 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:28.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.252 --rc genhtml_branch_coverage=1 00:19:28.252 --rc genhtml_function_coverage=1 00:19:28.252 --rc genhtml_legend=1 00:19:28.252 --rc geninfo_all_blocks=1 00:19:28.252 --rc geninfo_unexecuted_blocks=1 00:19:28.252 00:19:28.252 ' 00:19:28.252 
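The `scripts/common.sh` trace above is a dotted-version comparison: `lt 1.15 2` splits both version strings on `.`/`-`, compares component by component, and here concludes lcov 1.15 is older than 2, selecting the legacy `--rc lcov_*` option spelling. A hedged, simplified re-implementation of that comparison (plain dotted numbers only; the real `cmp_versions` also handles `>`, `=`, and mixed separators):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the lt/cmp_versions logic traced above.
# Returns 0 (true) when $1 is strictly less than $2.
version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    # Walk the longer of the two component lists, padding with 0.
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2: use legacy lcov options"
```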
17:53:55 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:28.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.252 --rc genhtml_branch_coverage=1 00:19:28.252 --rc genhtml_function_coverage=1 00:19:28.252 --rc genhtml_legend=1 00:19:28.252 --rc geninfo_all_blocks=1 00:19:28.252 --rc geninfo_unexecuted_blocks=1 00:19:28.252 00:19:28.252 ' 00:19:28.252 17:53:55 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:28.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.252 --rc genhtml_branch_coverage=1 00:19:28.252 --rc genhtml_function_coverage=1 00:19:28.252 --rc genhtml_legend=1 00:19:28.252 --rc geninfo_all_blocks=1 00:19:28.252 --rc geninfo_unexecuted_blocks=1 00:19:28.252 00:19:28.252 ' 00:19:28.252 17:53:55 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:28.252 17:53:55 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:28.252 17:53:55 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:28.252 17:53:55 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:28.252 17:53:55 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:28.252 17:53:55 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:28.252 17:53:55 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:28.252 17:53:55 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:28.252 17:53:55 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:28.512 17:53:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.512 17:53:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90181 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:28.512 17:53:55 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90181 00:19:28.512 17:53:55 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90181 ']' 00:19:28.512 17:53:55 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.512 17:53:55 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.512 17:53:55 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.512 17:53:55 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.512 17:53:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.512 [2024-11-20 17:53:55.546894] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:19:28.512 [2024-11-20 17:53:55.547118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90181 ] 00:19:28.772 [2024-11-20 17:53:55.726698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.772 [2024-11-20 17:53:55.863272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.772 [2024-11-20 17:53:55.863312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.713 17:53:56 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.713 17:53:56 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:29.713 17:53:56 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:29.713 17:53:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.713 17:53:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.972 17:53:56 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:29.972 17:53:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.972 17:53:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.972 17:53:56 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:29.972 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:29.972 ' 00:19:31.400 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:31.400 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:31.400 17:53:58 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:31.401 17:53:58 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.401 17:53:58 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.661 17:53:58 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:31.661 17:53:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.661 17:53:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.661 17:53:58 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:31.661 ' 00:19:32.601 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:32.601 17:53:59 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:32.601 17:53:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.601 17:53:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.861 17:53:59 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:32.861 17:53:59 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.861 17:53:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.861 17:53:59 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:32.861 17:53:59 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:33.120 17:54:00 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:33.379 17:54:00 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:33.379 17:54:00 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:33.379 17:54:00 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.379 17:54:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.379 17:54:00 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:33.379 17:54:00 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.379 17:54:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.379 17:54:00 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:33.379 ' 00:19:34.319 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:34.319 17:54:01 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:34.319 17:54:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.319 17:54:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.579 17:54:01 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:34.579 17:54:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.579 17:54:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.579 17:54:01 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:34.579 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:34.579 ' 00:19:35.960 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:35.960 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:35.960 17:54:02 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:35.960 17:54:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.960 17:54:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:35.960 17:54:03 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90181 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90181 ']' 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90181 00:19:35.960 17:54:03 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90181 00:19:35.960 killing process with pid 90181 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90181' 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90181 00:19:35.960 17:54:03 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90181 00:19:38.500 17:54:05 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:38.500 17:54:05 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90181 ']' 00:19:38.500 17:54:05 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90181 00:19:38.500 17:54:05 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90181 ']' 00:19:38.500 17:54:05 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90181 00:19:38.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90181) - No such process 00:19:38.501 Process with pid 90181 is not found 00:19:38.501 17:54:05 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90181 is not found' 00:19:38.501 17:54:05 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:38.501 17:54:05 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:38.501 17:54:05 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:38.501 17:54:05 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:38.501 00:19:38.501 real 0m10.415s 00:19:38.501 user 0m21.158s 00:19:38.501 sys 
0m1.338s 00:19:38.501 17:54:05 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.501 17:54:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.501 ************************************ 00:19:38.501 END TEST spdkcli_raid 00:19:38.501 ************************************ 00:19:38.501 17:54:05 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:38.501 17:54:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.501 17:54:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.501 17:54:05 -- common/autotest_common.sh@10 -- # set +x 00:19:38.501 ************************************ 00:19:38.501 START TEST blockdev_raid5f 00:19:38.501 ************************************ 00:19:38.501 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:38.761 * Looking for test storage... 00:19:38.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:38.761 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:38.761 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:38.761 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:38.761 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.761 17:54:05 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:38.761 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.761 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:38.761 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.761 --rc genhtml_branch_coverage=1 00:19:38.761 --rc genhtml_function_coverage=1 00:19:38.761 --rc genhtml_legend=1 00:19:38.762 --rc geninfo_all_blocks=1 00:19:38.762 --rc geninfo_unexecuted_blocks=1 00:19:38.762 00:19:38.762 ' 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:38.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.762 --rc genhtml_branch_coverage=1 00:19:38.762 --rc genhtml_function_coverage=1 00:19:38.762 --rc genhtml_legend=1 00:19:38.762 --rc geninfo_all_blocks=1 00:19:38.762 --rc geninfo_unexecuted_blocks=1 00:19:38.762 00:19:38.762 ' 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:38.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.762 --rc genhtml_branch_coverage=1 00:19:38.762 --rc genhtml_function_coverage=1 00:19:38.762 --rc genhtml_legend=1 00:19:38.762 --rc geninfo_all_blocks=1 00:19:38.762 --rc geninfo_unexecuted_blocks=1 00:19:38.762 00:19:38.762 ' 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:38.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.762 --rc genhtml_branch_coverage=1 00:19:38.762 --rc genhtml_function_coverage=1 00:19:38.762 --rc genhtml_legend=1 00:19:38.762 --rc geninfo_all_blocks=1 00:19:38.762 --rc geninfo_unexecuted_blocks=1 00:19:38.762 00:19:38.762 ' 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90466 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:38.762 17:54:05 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90466 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90466 ']' 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.762 17:54:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.022 [2024-11-20 17:54:06.022903] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:19:39.022 [2024-11-20 17:54:06.023097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90466 ] 00:19:39.282 [2024-11-20 17:54:06.199318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.283 [2024-11-20 17:54:06.329470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.223 17:54:07 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.223 17:54:07 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:40.223 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:40.223 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:40.223 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:40.223 17:54:07 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.223 17:54:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.223 Malloc0 00:19:40.483 Malloc1 00:19:40.483 Malloc2 00:19:40.483 17:54:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.483 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:40.483 17:54:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b4d70798-410e-444a-b523-6721c49ed760"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b4d70798-410e-444a-b523-6721c49ed760",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b4d70798-410e-444a-b523-6721c49ed760",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "72d4d89d-c9c4-4ba9-9e85-281c6b62ebb4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "1eb45393-fdc1-494b-8503-007ce8785c67",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7c41c289-98ac-462e-aab2-928b105bc182",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:40.484 17:54:07 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90466 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90466 ']' 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90466 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.484 17:54:07 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90466 00:19:40.744 killing process with pid 90466 00:19:40.744 17:54:07 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.744 17:54:07 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.744 17:54:07 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90466' 00:19:40.744 17:54:07 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90466 00:19:40.744 17:54:07 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90466 00:19:43.292 17:54:10 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:43.292 17:54:10 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:43.292 17:54:10 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:43.292 17:54:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.292 17:54:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.292 ************************************ 00:19:43.292 START TEST bdev_hello_world 00:19:43.292 ************************************ 00:19:43.292 17:54:10 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:43.552 [2024-11-20 17:54:10.538130] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:19:43.552 [2024-11-20 17:54:10.538285] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90538 ] 00:19:43.552 [2024-11-20 17:54:10.712415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.812 [2024-11-20 17:54:10.845208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.383 [2024-11-20 17:54:11.460184] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:44.383 [2024-11-20 17:54:11.460234] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:44.383 [2024-11-20 17:54:11.460251] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:44.383 [2024-11-20 17:54:11.460708] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:44.383 [2024-11-20 17:54:11.460877] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:44.383 [2024-11-20 17:54:11.460895] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:44.383 [2024-11-20 17:54:11.460939] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:44.383 00:19:44.383 [2024-11-20 17:54:11.460956] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:45.767 00:19:45.767 real 0m2.465s 00:19:45.767 user 0m1.993s 00:19:45.767 sys 0m0.349s 00:19:45.767 17:54:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.767 ************************************ 00:19:45.767 END TEST bdev_hello_world 00:19:45.767 ************************************ 00:19:45.767 17:54:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:46.027 17:54:12 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:46.027 17:54:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:46.027 17:54:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.027 17:54:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.027 ************************************ 00:19:46.027 START TEST bdev_bounds 00:19:46.027 ************************************ 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90581 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90581' 00:19:46.027 Process bdevio pid: 90581 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90581 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90581 ']' 00:19:46.027 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.027 17:54:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:46.028 [2024-11-20 17:54:13.076231] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:19:46.028 [2024-11-20 17:54:13.076336] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90581 ] 00:19:46.287 [2024-11-20 17:54:13.246305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:46.287 [2024-11-20 17:54:13.381492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.287 [2024-11-20 17:54:13.381699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.287 [2024-11-20 17:54:13.381739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.857 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.857 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:46.857 17:54:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:47.118 I/O targets: 00:19:47.118 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:47.118 00:19:47.118 
00:19:47.118 CUnit - A unit testing framework for C - Version 2.1-3 00:19:47.118 http://cunit.sourceforge.net/ 00:19:47.118 00:19:47.118 00:19:47.118 Suite: bdevio tests on: raid5f 00:19:47.118 Test: blockdev write read block ...passed 00:19:47.118 Test: blockdev write zeroes read block ...passed 00:19:47.118 Test: blockdev write zeroes read no split ...passed 00:19:47.118 Test: blockdev write zeroes read split ...passed 00:19:47.378 Test: blockdev write zeroes read split partial ...passed 00:19:47.378 Test: blockdev reset ...passed 00:19:47.378 Test: blockdev write read 8 blocks ...passed 00:19:47.378 Test: blockdev write read size > 128k ...passed 00:19:47.378 Test: blockdev write read invalid size ...passed 00:19:47.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:47.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:47.378 Test: blockdev write read max offset ...passed 00:19:47.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:47.378 Test: blockdev writev readv 8 blocks ...passed 00:19:47.378 Test: blockdev writev readv 30 x 1block ...passed 00:19:47.378 Test: blockdev writev readv block ...passed 00:19:47.378 Test: blockdev writev readv size > 128k ...passed 00:19:47.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:47.378 Test: blockdev comparev and writev ...passed 00:19:47.378 Test: blockdev nvme passthru rw ...passed 00:19:47.378 Test: blockdev nvme passthru vendor specific ...passed 00:19:47.378 Test: blockdev nvme admin passthru ...passed 00:19:47.378 Test: blockdev copy ...passed 00:19:47.378 00:19:47.378 Run Summary: Type Total Ran Passed Failed Inactive 00:19:47.378 suites 1 1 n/a 0 0 00:19:47.378 tests 23 23 23 0 0 00:19:47.378 asserts 130 130 130 0 n/a 00:19:47.378 00:19:47.378 Elapsed time = 0.612 seconds 00:19:47.378 0 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90581 00:19:47.378 
17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90581 ']' 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90581 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90581 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90581' 00:19:47.378 killing process with pid 90581 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90581 00:19:47.378 17:54:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90581 00:19:48.760 ************************************ 00:19:48.760 END TEST bdev_bounds 00:19:48.760 ************************************ 00:19:48.760 17:54:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:48.760 00:19:48.760 real 0m2.880s 00:19:48.760 user 0m7.076s 00:19:48.760 sys 0m0.463s 00:19:48.760 17:54:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.760 17:54:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:48.761 17:54:15 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:48.761 17:54:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:48.761 17:54:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.761 
17:54:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.020 ************************************ 00:19:49.020 START TEST bdev_nbd 00:19:49.020 ************************************ 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:49.020 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90641 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90641 /var/tmp/spdk-nbd.sock 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90641 ']' 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:49.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.021 17:54:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:49.021 [2024-11-20 17:54:16.056172] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:19:49.021 [2024-11-20 17:54:16.056388] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.281 [2024-11-20 17:54:16.234844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.281 [2024-11-20 17:54:16.364349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:50.220 1+0 records in 00:19:50.220 1+0 records out 00:19:50.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057124 s, 7.2 MB/s 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:50.220 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:50.480 { 00:19:50.480 "nbd_device": "/dev/nbd0", 00:19:50.480 "bdev_name": "raid5f" 00:19:50.480 } 00:19:50.480 ]' 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:50.480 { 00:19:50.480 "nbd_device": "/dev/nbd0", 00:19:50.480 "bdev_name": "raid5f" 00:19:50.480 } 00:19:50.480 ]' 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:50.480 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.740 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:51.000 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:51.000 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:51.001 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:51.001 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:51.001 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:51.001 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:51.001 17:54:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:51.001 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:51.260 /dev/nbd0 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:51.260 17:54:18 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.260 1+0 records in 00:19:51.260 1+0 records out 00:19:51.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317006 s, 12.9 MB/s 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.260 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:51.261 17:54:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:51.261 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.261 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:51.261 17:54:18 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:51.261 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.261 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:51.520 { 00:19:51.520 "nbd_device": "/dev/nbd0", 00:19:51.520 "bdev_name": "raid5f" 00:19:51.520 } 00:19:51.520 ]' 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:51.520 { 00:19:51.520 "nbd_device": "/dev/nbd0", 00:19:51.520 "bdev_name": "raid5f" 00:19:51.520 } 00:19:51.520 ]' 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:51.520 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:51.521 256+0 records in 00:19:51.521 256+0 records out 00:19:51.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125518 s, 83.5 MB/s 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:51.521 256+0 records in 00:19:51.521 256+0 records out 00:19:51.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277189 s, 37.8 MB/s 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.521 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.781 17:54:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:52.040 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:52.040 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:52.040 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:52.040 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:52.040 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:52.041 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:52.299 malloc_lvol_verify 00:19:52.299 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:52.559 5bccf0aa-c371-4d45-8a7f-14312d7262ed 00:19:52.559 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:52.559 86b4c45a-1c4f-4858-8958-4f7c57cbe2ca 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:52.820 /dev/nbd0 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:52.820 mke2fs 1.47.0 (5-Feb-2023) 00:19:52.820 Discarding device blocks: 0/4096 done 00:19:52.820 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:52.820 00:19:52.820 Allocating group tables: 0/1 done 00:19:52.820 Writing inode tables: 0/1 done 00:19:52.820 Creating journal (1024 blocks): done 00:19:52.820 Writing superblocks and filesystem accounting information: 0/1 done 00:19:52.820 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.820 17:54:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90641 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90641 ']' 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90641 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90641 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90641' 00:19:53.081 killing process with pid 90641 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90641 00:19:53.081 17:54:20 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90641 00:19:55.023 17:54:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:55.023 00:19:55.023 real 0m5.797s 00:19:55.023 user 0m7.632s 00:19:55.023 sys 0m1.381s 00:19:55.023 ************************************ 00:19:55.023 END TEST bdev_nbd 00:19:55.023 ************************************ 00:19:55.023 17:54:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.023 17:54:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:55.023 17:54:21 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:55.023 17:54:21 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:55.023 17:54:21 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:55.023 17:54:21 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:55.023 17:54:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:55.023 17:54:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.023 17:54:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:55.023 ************************************ 00:19:55.023 START TEST bdev_fio 00:19:55.023 ************************************ 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:55.023 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:55.023 ************************************ 00:19:55.023 START TEST bdev_fio_rw_verify 00:19:55.023 ************************************ 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:55.023 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.024 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.024 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:55.024 17:54:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.024 17:54:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:55.024 17:54:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:55.024 17:54:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:55.024 17:54:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.024 17:54:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:55.284 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:55.284 fio-3.35 00:19:55.284 Starting 1 thread 00:20:07.507 00:20:07.507 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90848: Wed Nov 20 17:54:33 2024 00:20:07.507 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(482MiB/10001msec) 00:20:07.507 slat (nsec): min=17147, max=60412, avg=19061.41, stdev=2003.86 00:20:07.507 clat (usec): min=11, max=313, avg=130.44, stdev=45.25 00:20:07.507 lat (usec): min=30, max=337, avg=149.50, stdev=45.52 00:20:07.507 clat percentiles (usec): 00:20:07.507 | 50.000th=[ 133], 99.000th=[ 217], 99.900th=[ 241], 99.990th=[ 273], 00:20:07.507 | 99.999th=[ 306] 00:20:07.507 write: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(501MiB/9876msec); 0 zone resets 00:20:07.507 slat (usec): min=7, max=275, avg=16.22, stdev= 3.59 00:20:07.507 clat (usec): min=59, max=1684, avg=298.21, stdev=40.95 00:20:07.507 lat (usec): min=73, max=1960, avg=314.43, stdev=41.99 00:20:07.507 clat percentiles (usec): 00:20:07.507 | 50.000th=[ 302], 99.000th=[ 379], 99.900th=[ 578], 99.990th=[ 979], 00:20:07.507 | 99.999th=[ 1549] 00:20:07.507 bw ( KiB/s): min=48944, max=53736, per=98.80%, avg=51294.74, stdev=1357.83, samples=19 00:20:07.507 iops : min=12236, max=13434, avg=12823.68, stdev=339.46, samples=19 00:20:07.507 lat (usec) : 20=0.01%, 50=0.01%, 
100=15.70%, 250=39.21%, 500=45.02% 00:20:07.507 lat (usec) : 750=0.04%, 1000=0.02% 00:20:07.507 lat (msec) : 2=0.01% 00:20:07.507 cpu : usr=98.89%, sys=0.41%, ctx=34, majf=0, minf=10136 00:20:07.507 IO depths : 1=7.6%, 2=19.8%, 4=55.3%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.507 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.507 issued rwts: total=123403,128187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:07.507 00:20:07.507 Run status group 0 (all jobs): 00:20:07.507 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=482MiB (505MB), run=10001-10001msec 00:20:07.507 WRITE: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=501MiB (525MB), run=9876-9876msec 00:20:07.767 ----------------------------------------------------- 00:20:07.767 Suppressions used: 00:20:07.767 count bytes template 00:20:07.767 1 7 /usr/src/fio/parse.c 00:20:07.767 943 90528 /usr/src/fio/iolog.c 00:20:07.767 1 8 libtcmalloc_minimal.so 00:20:07.767 1 904 libcrypto.so 00:20:07.767 ----------------------------------------------------- 00:20:07.767 00:20:07.767 00:20:07.767 real 0m12.937s 00:20:07.767 user 0m13.017s 00:20:07.767 sys 0m0.699s 00:20:07.767 17:54:34 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.767 ************************************ 00:20:07.767 END TEST bdev_fio_rw_verify 00:20:07.767 ************************************ 00:20:07.767 17:54:34 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:08.028 17:54:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b4d70798-410e-444a-b523-6721c49ed760"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b4d70798-410e-444a-b523-6721c49ed760",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b4d70798-410e-444a-b523-6721c49ed760",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "72d4d89d-c9c4-4ba9-9e85-281c6b62ebb4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "1eb45393-fdc1-494b-8503-007ce8785c67",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7c41c289-98ac-462e-aab2-928b105bc182",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:08.028 17:54:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:08.028 17:54:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:08.028 17:54:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:08.028 /home/vagrant/spdk_repo/spdk 00:20:08.028 ************************************ 00:20:08.028 END TEST bdev_fio 00:20:08.028 ************************************ 00:20:08.028 17:54:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:08.028 17:54:35 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:08.028 17:54:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:08.028 00:20:08.028 real 0m13.237s 00:20:08.028 user 0m13.154s 00:20:08.028 sys 0m0.834s 00:20:08.028 17:54:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.028 17:54:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:08.028 17:54:35 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:08.028 17:54:35 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:08.028 17:54:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:08.028 17:54:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.028 17:54:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:08.028 ************************************ 00:20:08.028 START TEST bdev_verify 00:20:08.028 ************************************ 00:20:08.028 17:54:35 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:08.288 [2024-11-20 17:54:35.218803] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:20:08.289 [2024-11-20 17:54:35.218913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91013 ] 00:20:08.289 [2024-11-20 17:54:35.393837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:08.547 [2024-11-20 17:54:35.528094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.547 [2024-11-20 17:54:35.528126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.115 Running I/O for 5 seconds... 00:20:10.987 10602.00 IOPS, 41.41 MiB/s [2024-11-20T17:54:39.538Z] 10563.50 IOPS, 41.26 MiB/s [2024-11-20T17:54:40.474Z] 10618.00 IOPS, 41.48 MiB/s [2024-11-20T17:54:41.411Z] 10621.00 IOPS, 41.49 MiB/s [2024-11-20T17:54:41.411Z] 10619.60 IOPS, 41.48 MiB/s 00:20:14.235 Latency(us) 00:20:14.235 [2024-11-20T17:54:41.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.235 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:14.235 Verification LBA range: start 0x0 length 0x2000 00:20:14.235 raid5f : 5.02 6466.43 25.26 0.00 0.00 29850.56 199.43 21635.47 00:20:14.235 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:14.235 Verification LBA range: start 0x2000 length 0x2000 00:20:14.235 raid5f : 5.01 4149.69 16.21 0.00 0.00 46498.20 282.61 33426.22 00:20:14.235 [2024-11-20T17:54:41.411Z] =================================================================================================================== 00:20:14.235 [2024-11-20T17:54:41.411Z] Total : 10616.12 41.47 0.00 0.00 36354.99 199.43 33426.22 00:20:15.614 00:20:15.614 real 0m7.481s 00:20:15.614 user 0m13.748s 00:20:15.614 sys 0m0.365s 00:20:15.614 17:54:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.614 17:54:42 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:15.614 ************************************ 00:20:15.614 END TEST bdev_verify 00:20:15.614 ************************************ 00:20:15.615 17:54:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:15.615 17:54:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:15.615 17:54:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.615 17:54:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:15.615 ************************************ 00:20:15.615 START TEST bdev_verify_big_io 00:20:15.615 ************************************ 00:20:15.615 17:54:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:15.615 [2024-11-20 17:54:42.770954] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:20:15.615 [2024-11-20 17:54:42.771136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91110 ] 00:20:15.874 [2024-11-20 17:54:42.944853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:16.133 [2024-11-20 17:54:43.076953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.133 [2024-11-20 17:54:43.076983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.701 Running I/O for 5 seconds... 
00:20:18.572 633.00 IOPS, 39.56 MiB/s [2024-11-20T17:54:47.122Z] 760.00 IOPS, 47.50 MiB/s [2024-11-20T17:54:48.057Z] 739.67 IOPS, 46.23 MiB/s [2024-11-20T17:54:48.992Z] 761.00 IOPS, 47.56 MiB/s [2024-11-20T17:54:48.992Z] 761.60 IOPS, 47.60 MiB/s 00:20:21.816 Latency(us) 00:20:21.816 [2024-11-20T17:54:48.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.816 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:21.816 Verification LBA range: start 0x0 length 0x200 00:20:21.816 raid5f : 5.18 441.48 27.59 0.00 0.00 7290371.47 262.93 315030.69 00:20:21.816 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:21.816 Verification LBA range: start 0x200 length 0x200 00:20:21.816 raid5f : 5.26 337.94 21.12 0.00 0.00 9405675.13 207.48 404777.81 00:20:21.816 [2024-11-20T17:54:48.992Z] =================================================================================================================== 00:20:21.816 [2024-11-20T17:54:48.992Z] Total : 779.42 48.71 0.00 0.00 8215816.82 207.48 404777.81 00:20:23.722 00:20:23.722 real 0m7.742s 00:20:23.722 user 0m14.290s 00:20:23.722 sys 0m0.349s 00:20:23.722 17:54:50 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.722 17:54:50 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:23.722 ************************************ 00:20:23.722 END TEST bdev_verify_big_io 00:20:23.722 ************************************ 00:20:23.722 17:54:50 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:23.722 17:54:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:23.722 17:54:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.722 17:54:50 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:23.722 ************************************ 00:20:23.722 START TEST bdev_write_zeroes 00:20:23.722 ************************************ 00:20:23.722 17:54:50 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:23.722 [2024-11-20 17:54:50.592791] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:20:23.722 [2024-11-20 17:54:50.592999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91210 ] 00:20:23.722 [2024-11-20 17:54:50.765539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.722 [2024-11-20 17:54:50.895277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.674 Running I/O for 1 seconds... 
00:20:25.629 29871.00 IOPS, 116.68 MiB/s 00:20:25.629 Latency(us) 00:20:25.629 [2024-11-20T17:54:52.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.629 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:25.629 raid5f : 1.01 29850.52 116.60 0.00 0.00 4275.39 1438.07 5924.00 00:20:25.629 [2024-11-20T17:54:52.805Z] =================================================================================================================== 00:20:25.629 [2024-11-20T17:54:52.805Z] Total : 29850.52 116.60 0.00 0.00 4275.39 1438.07 5924.00 00:20:27.011 ************************************ 00:20:27.011 END TEST bdev_write_zeroes 00:20:27.011 ************************************ 00:20:27.011 00:20:27.011 real 0m3.464s 00:20:27.011 user 0m2.990s 00:20:27.011 sys 0m0.345s 00:20:27.011 17:54:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.011 17:54:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:27.011 17:54:54 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.011 17:54:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:27.011 17:54:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.011 17:54:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:27.011 ************************************ 00:20:27.011 START TEST bdev_json_nonenclosed 00:20:27.011 ************************************ 00:20:27.011 17:54:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.011 [2024-11-20 
17:54:54.138356] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:20:27.011 [2024-11-20 17:54:54.138477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91263 ] 00:20:27.272 [2024-11-20 17:54:54.312995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.272 [2024-11-20 17:54:54.441362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.272 [2024-11-20 17:54:54.441465] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:27.272 [2024-11-20 17:54:54.441494] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:27.272 [2024-11-20 17:54:54.441505] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:27.532 00:20:27.532 real 0m0.647s 00:20:27.532 user 0m0.396s 00:20:27.532 sys 0m0.147s 00:20:27.532 ************************************ 00:20:27.532 END TEST bdev_json_nonenclosed 00:20:27.532 ************************************ 00:20:27.532 17:54:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.532 17:54:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:27.793 17:54:54 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.793 17:54:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:27.793 17:54:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.793 17:54:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:27.793 
************************************ 00:20:27.793 START TEST bdev_json_nonarray 00:20:27.793 ************************************ 00:20:27.793 17:54:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.793 [2024-11-20 17:54:54.870144] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:20:27.793 [2024-11-20 17:54:54.870260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91294 ] 00:20:28.053 [2024-11-20 17:54:55.048724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.053 [2024-11-20 17:54:55.182208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.053 [2024-11-20 17:54:55.182325] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:28.053 [2024-11-20 17:54:55.182343] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:28.053 [2024-11-20 17:54:55.182364] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:28.313 00:20:28.313 real 0m0.671s 00:20:28.313 user 0m0.414s 00:20:28.313 sys 0m0.152s 00:20:28.313 17:54:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.313 17:54:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:28.313 ************************************ 00:20:28.313 END TEST bdev_json_nonarray 00:20:28.313 ************************************ 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:28.574 17:54:55 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:28.574 00:20:28.574 real 0m49.852s 00:20:28.574 user 1m6.310s 00:20:28.574 sys 0m5.704s 00:20:28.574 17:54:55 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.574 ************************************ 00:20:28.574 END TEST blockdev_raid5f 00:20:28.574 
************************************ 00:20:28.574 17:54:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:28.574 17:54:55 -- spdk/autotest.sh@194 -- # uname -s 00:20:28.574 17:54:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:28.574 17:54:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:28.574 17:54:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:28.574 17:54:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:28.574 17:54:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.574 17:54:55 -- common/autotest_common.sh@10 -- # set +x 00:20:28.574 17:54:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:28.574 17:54:55 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:28.574 17:54:55 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:28.574 17:54:55 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:28.574 17:54:55 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:28.574 17:54:55 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:20:28.574 17:54:55 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:28.574 17:54:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.574 17:54:55 -- common/autotest_common.sh@10 -- # set +x 00:20:28.574 17:54:55 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:28.574 17:54:55 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:28.574 17:54:55 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:28.574 17:54:55 -- common/autotest_common.sh@10 -- # set +x 00:20:31.118 INFO: APP EXITING 00:20:31.118 INFO: killing all VMs 00:20:31.118 INFO: killing vhost app 00:20:31.118 INFO: EXIT DONE 00:20:31.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:31.688 Waiting for block devices as requested 00:20:31.688 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:31.688 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:32.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:32.626 Cleaning 00:20:32.626 Removing: /var/run/dpdk/spdk0/config 00:20:32.626 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:32.626 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:32.626 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:32.626 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:32.626 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:32.886 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:32.886 Removing: /dev/shm/spdk_tgt_trace.pid57103 00:20:32.886 Removing: /var/run/dpdk/spdk0 00:20:32.886 Removing: /var/run/dpdk/spdk_pid56851 00:20:32.886 Removing: /var/run/dpdk/spdk_pid57103 00:20:32.886 Removing: /var/run/dpdk/spdk_pid57343 00:20:32.886 Removing: /var/run/dpdk/spdk_pid57458 00:20:32.886 Removing: /var/run/dpdk/spdk_pid57525 00:20:32.886 Removing: /var/run/dpdk/spdk_pid57664 00:20:32.887 Removing: /var/run/dpdk/spdk_pid57688 00:20:32.887 
Removing: /var/run/dpdk/spdk_pid57909 00:20:32.887 Removing: /var/run/dpdk/spdk_pid58035 00:20:32.887 Removing: /var/run/dpdk/spdk_pid58148 00:20:32.887 Removing: /var/run/dpdk/spdk_pid58281 00:20:32.887 Removing: /var/run/dpdk/spdk_pid58400 00:20:32.887 Removing: /var/run/dpdk/spdk_pid58434 00:20:32.887 Removing: /var/run/dpdk/spdk_pid58476 00:20:32.887 Removing: /var/run/dpdk/spdk_pid58552 00:20:32.887 Removing: /var/run/dpdk/spdk_pid58679 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59138 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59214 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59300 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59316 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59481 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59502 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59667 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59690 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59758 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59776 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59851 00:20:32.887 Removing: /var/run/dpdk/spdk_pid59869 00:20:32.887 Removing: /var/run/dpdk/spdk_pid60075 00:20:32.887 Removing: /var/run/dpdk/spdk_pid60112 00:20:32.887 Removing: /var/run/dpdk/spdk_pid60201 00:20:32.887 Removing: /var/run/dpdk/spdk_pid61583 00:20:32.887 Removing: /var/run/dpdk/spdk_pid61794 00:20:32.887 Removing: /var/run/dpdk/spdk_pid61940 00:20:32.887 Removing: /var/run/dpdk/spdk_pid62589 00:20:32.887 Removing: /var/run/dpdk/spdk_pid62799 00:20:32.887 Removing: /var/run/dpdk/spdk_pid62946 00:20:32.887 Removing: /var/run/dpdk/spdk_pid63589 00:20:32.887 Removing: /var/run/dpdk/spdk_pid63914 00:20:32.887 Removing: /var/run/dpdk/spdk_pid64064 00:20:32.887 Removing: /var/run/dpdk/spdk_pid65449 00:20:32.887 Removing: /var/run/dpdk/spdk_pid65703 00:20:32.887 Removing: /var/run/dpdk/spdk_pid65849 00:20:32.887 Removing: /var/run/dpdk/spdk_pid67245 00:20:32.887 Removing: /var/run/dpdk/spdk_pid67504 00:20:32.887 Removing: /var/run/dpdk/spdk_pid67644 00:20:32.887 Removing: 
/var/run/dpdk/spdk_pid69046 00:20:32.887 Removing: /var/run/dpdk/spdk_pid69497 00:20:32.887 Removing: /var/run/dpdk/spdk_pid69647 00:20:33.147 Removing: /var/run/dpdk/spdk_pid71151 00:20:33.147 Removing: /var/run/dpdk/spdk_pid71410 00:20:33.147 Removing: /var/run/dpdk/spdk_pid71561 00:20:33.147 Removing: /var/run/dpdk/spdk_pid73065 00:20:33.147 Removing: /var/run/dpdk/spdk_pid73324 00:20:33.147 Removing: /var/run/dpdk/spdk_pid73474 00:20:33.147 Removing: /var/run/dpdk/spdk_pid74972 00:20:33.147 Removing: /var/run/dpdk/spdk_pid75460 00:20:33.147 Removing: /var/run/dpdk/spdk_pid75612 00:20:33.147 Removing: /var/run/dpdk/spdk_pid75756 00:20:33.147 Removing: /var/run/dpdk/spdk_pid76186 00:20:33.147 Removing: /var/run/dpdk/spdk_pid76917 00:20:33.147 Removing: /var/run/dpdk/spdk_pid77295 00:20:33.147 Removing: /var/run/dpdk/spdk_pid77988 00:20:33.147 Removing: /var/run/dpdk/spdk_pid78431 00:20:33.147 Removing: /var/run/dpdk/spdk_pid79200 00:20:33.147 Removing: /var/run/dpdk/spdk_pid79609 00:20:33.147 Removing: /var/run/dpdk/spdk_pid81609 00:20:33.147 Removing: /var/run/dpdk/spdk_pid82054 00:20:33.147 Removing: /var/run/dpdk/spdk_pid82494 00:20:33.147 Removing: /var/run/dpdk/spdk_pid84579 00:20:33.147 Removing: /var/run/dpdk/spdk_pid85070 00:20:33.147 Removing: /var/run/dpdk/spdk_pid85586 00:20:33.147 Removing: /var/run/dpdk/spdk_pid86649 00:20:33.147 Removing: /var/run/dpdk/spdk_pid86976 00:20:33.147 Removing: /var/run/dpdk/spdk_pid87914 00:20:33.147 Removing: /var/run/dpdk/spdk_pid88237 00:20:33.147 Removing: /var/run/dpdk/spdk_pid89182 00:20:33.147 Removing: /var/run/dpdk/spdk_pid89510 00:20:33.147 Removing: /var/run/dpdk/spdk_pid90181 00:20:33.147 Removing: /var/run/dpdk/spdk_pid90466 00:20:33.147 Removing: /var/run/dpdk/spdk_pid90538 00:20:33.147 Removing: /var/run/dpdk/spdk_pid90581 00:20:33.147 Removing: /var/run/dpdk/spdk_pid90834 00:20:33.147 Removing: /var/run/dpdk/spdk_pid91013 00:20:33.147 Removing: /var/run/dpdk/spdk_pid91110 00:20:33.147 Removing: 
/var/run/dpdk/spdk_pid91210 00:20:33.147 Removing: /var/run/dpdk/spdk_pid91263 00:20:33.147 Removing: /var/run/dpdk/spdk_pid91294 00:20:33.147 Clean 00:20:33.147 17:55:00 -- common/autotest_common.sh@1453 -- # return 0 00:20:33.147 17:55:00 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:33.147 17:55:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.147 17:55:00 -- common/autotest_common.sh@10 -- # set +x 00:20:33.407 17:55:00 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:33.407 17:55:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.407 17:55:00 -- common/autotest_common.sh@10 -- # set +x 00:20:33.407 17:55:00 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:33.407 17:55:00 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:33.407 17:55:00 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:33.407 17:55:00 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:33.407 17:55:00 -- spdk/autotest.sh@398 -- # hostname 00:20:33.407 17:55:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:33.667 geninfo: WARNING: invalid characters removed from testname! 
00:20:55.620 17:55:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:58.917 17:55:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:00.296 17:55:27 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:02.203 17:55:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:04.744 17:55:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:06.654 17:55:33 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:08.623 17:55:35 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:08.623 17:55:35 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:08.623 17:55:35 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:08.623 17:55:35 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:08.623 17:55:35 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:08.623 17:55:35 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:08.623 + [[ -n 5427 ]] 00:21:08.623 + sudo kill 5427 00:21:08.633 [Pipeline] } 00:21:08.649 [Pipeline] // timeout 00:21:08.655 [Pipeline] } 00:21:08.670 [Pipeline] // stage 00:21:08.676 [Pipeline] } 00:21:08.692 [Pipeline] // catchError 00:21:08.703 [Pipeline] stage 00:21:08.705 [Pipeline] { (Stop VM) 00:21:08.720 [Pipeline] sh 00:21:09.004 + vagrant halt 00:21:11.546 ==> default: Halting domain... 00:21:19.699 [Pipeline] sh 00:21:19.988 + vagrant destroy -f 00:21:22.531 ==> default: Removing domain... 
00:21:22.545 [Pipeline] sh 00:21:22.832 + mv output /var/jenkins/workspace/raid-vg-autotest_3/output 00:21:22.843 [Pipeline] } 00:21:22.859 [Pipeline] // stage 00:21:22.865 [Pipeline] } 00:21:22.882 [Pipeline] // dir 00:21:22.889 [Pipeline] } 00:21:22.903 [Pipeline] // wrap 00:21:22.909 [Pipeline] } 00:21:22.921 [Pipeline] // catchError 00:21:22.930 [Pipeline] stage 00:21:22.933 [Pipeline] { (Epilogue) 00:21:22.945 [Pipeline] sh 00:21:23.229 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:27.438 [Pipeline] catchError 00:21:27.440 [Pipeline] { 00:21:27.457 [Pipeline] sh 00:21:27.742 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:27.742 Artifacts sizes are good 00:21:27.752 [Pipeline] } 00:21:27.767 [Pipeline] // catchError 00:21:27.778 [Pipeline] archiveArtifacts 00:21:27.787 Archiving artifacts 00:21:27.892 [Pipeline] cleanWs 00:21:27.909 [WS-CLEANUP] Deleting project workspace... 00:21:27.909 [WS-CLEANUP] Deferred wipeout is used... 00:21:27.940 [WS-CLEANUP] done 00:21:27.942 [Pipeline] } 00:21:27.957 [Pipeline] // stage 00:21:27.962 [Pipeline] } 00:21:27.977 [Pipeline] // node 00:21:27.984 [Pipeline] End of Pipeline 00:21:28.023 Finished: SUCCESS